What to know
- Google is building Gemini Nano right into Chrome, allowing users to run the LLM locally.
- Currently, this feature is only available on Chrome Canary and requires some manual setup.
- Use the steps in the guide to set up built-in Gemini Nano in Chrome.
Google is in the process of adding Gemini Nano right into Chrome. With this change, you’ll have the Gemini LLM running within Chrome, allowing you to use it offline and get answers to your questions instantaneously.
However, since this is currently an experiment, it may be a while before you see it in the browser’s stable build. But with Chrome Canary, you can have it up and running today. Here’s everything you need to know to set up Gemini Nano built right into Chrome.
How to set up Gemini Nano built into Chrome
- First, download and install Chrome Canary if you haven’t already. Your Chrome Canary version should be 127 or above.
- Open chrome://flags/#prompt-api-for-gemini-nano in Chrome Canary.
- Set it to Enabled, but do NOT Relaunch Chrome Canary when prompted.
- Next, open chrome://flags/#optimization-guide-on-device-model in Chrome Canary.
- Set it to Enabled BypassPerfRequirement.
- Now Relaunch Chrome Canary.
- Open chrome://components/
- Scroll down and look for Optimization Guide on Device Model. Make sure it’s fully downloaded. (If the version is 0.0.0.0, click on ‘Check for update’).
- Once the model is downloaded, open any webpage and press F12 to open the DevTools console.
- Type window.ai into the console and press Enter (see the sketch after this list). If it returns an object rather than an error, your setup is complete.
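To make that last check a little more explicit, here is a snippet you can paste into the DevTools console. This is a minimal sketch in plain console JavaScript; it only verifies that the window.ai object exists and logs what it finds.

```js
// Paste into the DevTools console (F12 -> Console) on any page in Chrome Canary.
// If the flags and the on-device model are set up correctly, window.ai is defined.
if (typeof window.ai !== "undefined") {
  console.log("Gemini Nano bindings found:", window.ai);
} else {
  console.log("window.ai is undefined. Re-check the flags and the component download.");
}
```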
Note: On Chrome Canary version 128, we could not get past step 10 (the window.ai check). However, several users have been able to set up Gemini Nano in Chrome using the same steps, so do try your luck.
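If window.ai does show up for you, you can go one step further and try prompting the on-device model straight from the console. The sketch below is a best-effort example assuming the early experimental Prompt API surface (canCreateTextSession, createTextSession, and a session prompt method) that some Canary builds exposed; the method names have changed between builds, so treat it as a starting point rather than a guaranteed API.

```js
// Hypothetical console snippet: prompt Gemini Nano via the experimental window.ai
// surface. The method names (canCreateTextSession, createTextSession, prompt) are
// assumptions based on early Canary builds and may differ in your build.
(async () => {
  const availability = await window.ai.canCreateTextSession();
  if (availability === "no") {
    console.log("On-device model not available yet:", availability);
    return;
  }
  const session = await window.ai.createTextSession();
  const answer = await session.prompt("Summarize what Gemini Nano is in one sentence.");
  console.log(answer);
})();
```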