How to Use Gemini 3 Pro Efficiently


Google recently released its latest LLM: Gemini 3. The model was long-awaited and widely discussed before its release. In this article, I’ll cover my first experience with the model and how it differs from other frontier LLMs.

The goal of this article is to share my first impressions of Gemini 3, highlighting what works well and what doesn’t. I’ll cover my experience using Gemini 3 both in the console and while coding with it.

Learn the pros and cons of Gemini 3 Pro, from testing with both coding and console usage
This infographic highlights the main contents of this article. I’ll discuss my first impressions using Gemini 3, both through the Gemini console and from coding with it. I’ll highlight what I like about the model and the parts I dislike. Image by ChatGPT.

Why you should use Gemini 3

In my opinion, Gemini 2.5 Pro was already the best conversational LLM available before the release of Gemini 3. The only area where I believe another LLM was better was coding, where Claude Sonnet 4.5 (thinking) had the edge.

The reason I believe Gemini 2.5 Pro is the best non-coding LLM is its:

  • Ability to efficiently find the correct information
  • Low rate of hallucinations
  • Willingness to disagree with me

I believe the last point is the most important. Some people want warm LLMs that feel good to talk to; however, I’d argue you (as a problem-solver) always want the opposite:

You want an LLM that goes straight to the point and is willing to say that you are wrong

My experience was that Gemini 2.5 was far better at this, compared to other LLMs such as GPT-5, Grok 4, and Claude Sonnet 4.5.

Considering that Google, in my opinion, already had the best LLM available, the release of a newer Gemini model is very interesting, and something I started testing right after release.


It’s worth pointing out that Google has released Gemini 3 Pro but not yet a Flash alternative, though it’s natural to expect such a model soon.

I’m not endorsed by Google in the writing of this article.

Gemini 3 in the console

I first tested Gemini 3 Pro in the console. The first thing that struck me was that it is relatively slow compared to Gemini 2.5 Pro. This is usually not an issue for me, as I value intelligence over speed, up to a certain threshold. Though Gemini 3 Pro is slower, I definitely wouldn’t say it’s too slow.

Another point I noticed is that Gemini 3 uses more images in its explanations. For example, when discussing EPC certificates, the model found the image below:

This is an image Gemini 3 Pro found while answering my questions about EPC certificates. Image by Gemini 3 Pro.

I also noticed it would sometimes generate images, even if I didn’t explicitly prompt for it. The image generation in the Gemini console is surprisingly fast.


The moment I was most impressed by Gemini 3’s capabilities was when I was analyzing the first research paper on diffusion models and discussing it with Gemini to understand it. The model was, of course, good at reading the paper, including its text, images, and equations; however, other frontier models possess this capability too. What impressed me most was chatting with Gemini 3 about diffusion models to deepen my understanding.

I had a misconception about the paper, thinking we were discussing conditional diffusion models when we were in fact looking at unconditional diffusion. Note that at the time, I didn’t even know the terms conditional and unconditional diffusion.

Gemini 3 then called out that I was misunderstanding the concepts, efficiently grasping the real intent behind my question, and significantly helped me deepen my understanding of diffusion models.

This image highlights a good interaction with Gemini 3 Pro, where the model understood where I was misunderstanding the topic at hand and called it out. Being able to call out things like this is an important trait for LLMs, in my opinion. Image from Gemini.

I also took some of the older queries I ran in the Gemini console with Gemini 2.5 Pro, and ran the exact same queries again, this time using Gemini 3 Pro. They were usually broader questions, though not particularly difficult ones.

The responses were overall quite similar, though I noticed Gemini 3 was better at telling me things I didn’t know, or uncovering topics and areas I (or Gemini 2.5 Pro) hadn’t thought about before. For example, when discussing how I write articles and what I could do to improve, Gemini 3 gave better feedback and came up with more creative approaches to improving my writing.


Thus, to sum it up, Gemini 3 in the console is:

  • A bit slow
  • Smart, providing good explanations
  • Good at uncovering things I haven’t thought about, which is super helpful for problem-solving
  • Willing to disagree with you and call out ambiguities, traits I believe are really important in an LLM assistant

Coding with Gemini 3

After working with Gemini 3 in the console, I started coding with it through Cursor. My overall experience is that it’s definitely a good model, though I still prefer Claude Sonnet 4.5 (thinking) as my main coding model. The main reason is that Gemini 3 too often comes up with overly complex solutions, and it is a slower model. However, Gemini 3 is most definitely a very capable coding model that might be better for other coding use-cases than mine; I mostly code infrastructure around AI agents and CDK stacks.

I tried Gemini 3 for coding in two main ways:

  • Making the game shown in this X post, from just a screenshot of the game
  • Coding some agentic infrastructure

First, I attempted to make the game from the X post. On the first prompt, the model made a Pygame version with all the squares, but it forgot the sprites (art), the bar on the left side, and so on. Basically, it made a very minimalist version of the game.

I then wrote a follow-up prompt with the following:

Make it look properly like this game  with the design and everything. Use

Note: When coding, you should be far more specific in your instructions than my prompt above. I used this prompt because I was essentially vibe-coding the game, and wanted to see Gemini 3 Pro’s ability to create a game from scratch.

After running the prompt above, it made a working game: the guests walk around, I can buy pavements and different machines, and the game essentially works as expected. Very impressive!


I continued coding with Gemini 3, but this time on a more production-grade code base. My overall conclusion is that Gemini 3 Pro usually gets the job done, though I experience bloated or worse code more often than I do with Claude Sonnet 4.5. Additionally, Claude Sonnet 4.5 is quite a bit faster, making it my model of choice when coding. However, I would probably regard Gemini 3 Pro as the second-best coding model I’ve used.

I also think that which coding model is best highly depends on what you’re coding. In some situations, speed is more important. In particular forms of coding, another model might be better, and so on, so you should really try out the models yourself and see what works best for you. The price of using these models is going down rapidly, and you can easily revert any changes made, making it super cheap to test out different models.

It’s also worth mentioning that Google released a new IDE called Antigravity, though I haven’t tried it yet.

Overall impressions

My overall impression of Gemini 3 is good, and my updated LLM usage stack will look like this:

  • Claude 4.5 Sonnet thinking for coding
  • GPT-5 when I need quick answers to simple questions (the ChatGPT app works well, opened with a shortcut)
  • GPT-5 when generating images
  • When I want more thorough answers and have longer discussions with an LLM about a topic, I’ll use Gemini 3. Typically, to learn new topics, discuss software architecture, or similar.

The pricing for Gemini 3 per million tokens is as follows (as of November 19, 2025, from the Gemini Developer API Docs):

  • If your prompt is 200k input tokens or fewer:
    • Input tokens: 2 USD
    • Output tokens: 12 USD
  • If your prompt is more than 200k input tokens:
    • Input tokens: 4 USD
    • Output tokens: 18 USD
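To see how the tiered pricing plays out in practice, here is a minimal cost-estimation sketch. It assumes the tier is selected by prompt (input) size and that the boundary case falls in the lower tier; the function name is my own, not part of any Google SDK.

```python
def gemini3_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate Gemini 3 Pro API cost from the tiered per-million-token prices."""
    # Above 200k input tokens, both input and output rates increase.
    if input_tokens <= 200_000:
        in_rate, out_rate = 2.0, 12.0   # USD per million tokens
    else:
        in_rate, out_rate = 4.0, 18.0
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000


# A 100k-token prompt with a 10k-token answer costs well under a dollar:
print(gemini3_cost_usd(100_000, 10_000))   # 0.32 USD
```

Note how a prompt just past the 200k boundary roughly doubles the effective rate, which matters if you routinely stuff large documents into context.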

In conclusion, I have good first impressions from Gemini 3, and highly recommend checking it out.

👉 Find me on socials:

💻 My webinar on Vision Language Models

📩 Subscribe to my newsletter

🧑‍💻 Get in touch

🔗 LinkedIn

🐦 X / Twitter

✍️ Medium
