“It’s safe to say that the people who volunteered to “shape” the initiative want it dead and buried. Of the 52 responses at the time of writing, all rejected the idea and asked Mozilla to stop shoving AI features into Firefox.”
Hey all, just a reminder to keep the community rules in mind when commenting on this thread. Criticism in any direction is fine, but please maintain your civility and don’t stoop to ad-hominem etc. Thanks.
don’t stoop to ad-hominem
At this point “ad hominem” is practically the nice name for the “enshittification” business model.
You want AI in your browser? Just add <your favourite spying ad machine> as a “search engine” option, with a URL like https://chatgpt.com/?q=%s and a shortcut like @ai. You can then ask it anything right there in your search bar.
Maybe also add one with some query pre-written in the URL, like https://chatgpt.com/?q=summarize this page for me: %s, as @ais or something; modern chatbots have the ability to make HTTP requests for you. Then if you want to summarize the page you’re on, you do Ctrl+L, Ctrl+C, @ais, Ctrl+V, Enter. There, I solved all your AI needs with four shortcuts and literally no client-side code.
Emissions. Economic problems. Goading several people into suicide.
Like, if you ship a food item with harmful bacteria in it, it gets recalled. If you have a fatal design flaw in a car, it gets recalled. If your LLM encourages people to separate from their loved ones and kill themselves, nothing fucking happens.
The more AI is being pushed into my face, the more it pisses me off.
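For the curious: the %s keyword-search trick suggested a few comments up is just template substitution. A rough sketch in Python (the function name is made up for illustration, and real browsers do more URL normalization than this):

```python
from urllib.parse import quote

def keyword_search(template: str, query: str) -> str:
    """Mimic a browser keyword search: substitute the
    percent-encoded query into the engine's URL template."""
    return template.replace("%s", quote(query))

# The @ai engine from above:
print(keyword_search("https://chatgpt.com/?q=%s", "what is a monad"))
# -> https://chatgpt.com/?q=what%20is%20a%20monad
```

Note this sketch only encodes the query itself; a browser would also normalize whatever spaces are already baked into the template.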
Mozilla could have made an extension and promoted it on their extension store, rather than adding cruft to their browser and turning it on by default.
The list of things to turn off to get a pleasant experience in Firefox is getting longer by the day. Not as bad as chrome, but still.
Oh, this triggers me. There have been multiple good suggestions for Firefox in the past that were closed as wontfix because “this can be provided by the community as an add-on”. Yet they shove the crappiest crap into the main browser now.
Switching to de-Mozilla’d Firefox (Waterfox) is as simple as copying your profile folder from FF to WF. Everything transfers over, and I mean everything. No mozilla corp, no opting out of shit in menus at all.
Just render the page, page renderer.
monkey paw: curls
Yes, the page has been rendered by a large webpage model based on the URL.
Those unhappy have another option: use an AI‑free Firefox fork such as LibreWolf, Waterfox, or Zen Browser.
And I have taken that other option.
Also: Vanadium and/or Ironfox on Android.
A fork is great, but the more a fork deviates, the more issues there are likely to be. Firefox is already at low enough numbers that it’s not really sustainable.
Then Mozilla should start listening to their users instead of driving them away. I know I stopped using Firefox, after being a regular user since launch, because the AI nonsense became the last straw.
Yes but we shouldn’t let perfect be the enemy of good.
What do you mean by “we shouldn’t let perfect be the enemy of good”? Why should I use a browser which is actively anti-user when there are better alternatives out there?
There aren’t better alternatives, and the ai shit is all easy to disable.
ai shit is all easy to disable
Users don’t have to disable it. Just give them a browser where they’re not enabled by default!
To my knowledge that literally only exists in the form of a Firefox fork like Librewolf. Which takes more effort to switch to than simply disabling a couple values in config.
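For reference, the “couple values in config” are these about:config prefs (names as seen in recent Firefox releases; worth double-checking in your version, since Mozilla moves these around):

```
browser.ml.chat.enabled  ->  false   (the AI chatbot sidebar)
browser.ml.enable        ->  false   (on-device ML features)
```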
Disabling it is one more thing to fingerprint me on.
It serves literally no purpose for any intelligent person.
There aren’t better alternatives
They are literally mentioned in the article:
Those unhappy have another option: use an AI‑free Firefox fork such as LibreWolf, Waterfox, or Zen Browser.
- https://manualdousuario.net/en/mozilla-firefox-window-ai/
Well, the first two essentially are Firefox and the latter is very immature to the point that I doubt you could reliably use it. It’s in beta.
The good is other rendering engines currently in the works.
My two biggest issues with a fork are: a) timely updates, they take a bit longer than the main version, and b) trust issues, I don’t trust most forks.
Try Phoenix for Firefox https://github.com/celenityy/Phoenix
What I don’t get: Isn’t Vanadium Chromium under the hood?
The truth is that Chromium is really good. It has the best security and performance.
Vanadium takes that and makes changes to make it more secure and private.
I think the problem with Chromium isn’t so much that Blink or V8 is bad or anything, it’s that it’s entirely under the thumb of Google. We’re essentially being set up for another Internet Explorer scenario, only Google unlike Microsoft won’t just be sitting on their laurels. Google is an advertising company, their entire business model is the web. Google Search is the tool used to find things, and with Google Chrome being the go-to browser for a lot of people, Google essentially ends up in control of both how you access the web and what you access.
That sort of power is scary, which is why I personally avoid anything Chromium based as much as I am able to. Chromium itself is fantastic, but I don’t like what it represents.
That’s valid.
That’s also part of the reason I like WebKit. It’s in a nice spot between Firefox and Chromium when it comes to security and performance. And, importantly, it’s not from an ad company and often passes on browser specs that would be harmful to privacy and security.
I forget what the site is called, but I saw one that nicely laid out different browser specs and explains why one of the engine developers decided against supporting or implementing each one.
Gods I wish Epiphany/Gnome Web was better. The Kagi people are working on bringing Orion to Linux, which I believe will be using WebKit there as well.
It’s kind of funny that we don’t have a solid WebKit browser on Linux, since WebKit has its roots in the KDE Project’s KHTML engine for Konqueror.
I guess that kind of ties in to my anger at these massive tech companies profiting off of FOSS but doing almost fuck-all to contribute. Google opening LLM-generated bug reports in FFmpeg, when all of the streaming media giants are propped up by this one project, is just one example. There should be some kind of tax for this, I feel. They’re benefitting greatly, and provide nothing in return.
Google Search hasn’t been usable for over a year.
Yes. Chromium isn’t bad in itself though.
Wrong. You are both popularizing Google tech and decreasing web browser diversity when you use any Chromium variety.
Vanadium is all about not standing out from the crowd. You use it to not make a statement and to hide your activity within the majority of user agents. If you want to make a statement that’s great, but you should only do it when you’re OK with being fingerprinted.
Who says I’m “making a statement” by using firefox? That’s not the goal at all.
I didn’t mean that in a negative way. All I meant was that using a non-Chromium browser to help move the needle is a privacy tradeoff. I keep both Vanadium and Ironfox installed and use them at different times for different things.
Google tech
Chromium is open-source. It doesn’t belong to Google or anyone else.
Are you serious? Chromium is very much mostly written by Google and the direction it takes in every way that matters is entirely controlled by Google.
This still doesn’t mean Google has some kind of ownership for it. Nobody stops you from forking it and taking it into a different direction.
It actually does. You’re still supporting a browser monoculture unless you change it so radically that it makes no sense to call it a fork anymore.
Hear me out.
This could actually be cool:
- If I could, say, mash in “get rid of the junk in this page” or “turn the page this color” or “navigate this form for me”.
- If it could block SEO and AI slop from search/pages, including images.
- If I can pick my own API (including local) and sampling parameters.
- If it doesn’t preload any model in RAM.
…That’d be neat.
What I don’t want is a chatbot or summarizer or deep researcher because there are 7000 bajillion of those, and there is literally no advantage to FF baking it in like every other service on the planet.
And… honestly, PCs are not ready for local LLMs. Not even the most hyper-optimized trellis quantization of Qwen3 30B is “good enough” to be reliable for the average person, and it still takes too much CPU RAM.
Honestly, PCs are not ready for local LLMs
The auto-translation LLM runs locally and works fine. Not quite as good as deepl but perfectly competent. That’s the one “AI” feature which is largely uncontroversial because it’s actually useful, unobtrusive, and privacy-enhancing.
Local LLMs (and related transformer-based models) can work, they just need a narrow focus. Unfortunately they’re not getting much love because cloud chatbots can generate a lot of incoherent bullshit really quickly and that’s a party trick that’s got all the CEOs creaming their pants at the ungrounded fantasy of being just another trillion dollars away from AGI.
Yeah that’s really awesome.
…But it’s also something the anti-AI crowd, which is a large part of FF’s userbase, would hate once they realize it’s an “LLM” doing the translation. The well has been poisoned by said CEOs.
I don’t think that’s really fair. There are cranky contrarians everywhere, but in my experience that feature has been well received even in the AI-skeptic tech circles that are well educated on the matter.
Besides, the technical “concerns” are only the tip of the iceberg. The reality is that people complaining about AI often fall back to those concerns because they can’t articulate how much most AI fucking sucks to use. It’s an eldritch version of Clippy. It’s inhuman and creepy in an uncanny-valley kind of way, half the time it doesn’t even fucking work right, and even if it does it’s less efficient than having a competent person (usually me) do the work.
Auto translation or live transcription tools are narrowly-focused tools that just work, don’t get in the way, and don’t try to get me to talk to them like they are a person. Who cares whether it’s an LLM. What matters is that it’s a completely different vibe. It’s useful, out of my way when I don’t need it, and isn’t pretending to have a first name. That’s what I want from my computer. And I haven’t seen significant backlash to that sentiment even in very left-wing tech circles.
If I can pick my own API (including local) and sampling parameters
You can do this now:
- self-host Ollama
- self-host Open WebUI and point it at Ollama
- enable local models in about:config
- select “local” instead of ChatGPT or whatever
The hardest part is hosting Open WebUI because AFAIK it only ships as a Docker image.
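If you’d rather skip the Docker dance, any OpenAI-compatible endpoint should work. A sketch using llama.cpp’s llama-server (the model filename is a placeholder, and the pref names are from recent Firefox builds and may have moved — verify them in about:config):

```shell
# Serve a local model over an OpenAI-compatible API with llama.cpp.
# Replace the model path with whatever GGUF you actually have.
llama-server -m ./your-model.gguf --port 8080

# Then in about:config:
#   browser.ml.chat.hideLocalhost -> false  (reveals the localhost option)
#   browser.ml.chat.provider      -> http://localhost:8080
```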
Open WebUI isn’t very ‘open’ and kinda problematic last I saw. Same with ollama; you should absolutely avoid either.
…And actually, why is open web ui even needed? For an embeddings model or something? All the browser should need is an openai compatible endpoint.
The firefox AI sidebar embeds an external open-webui. It doesn’t roll its own ui for chat. Everything with AI is done in the quickest laziest way.
What exactly isn’t very open about open-webui or ollama? Are there some binary blobs or weird copyright licensing? What alternatives are you suggesting?
https://old.reddit.com/r/opensource/comments/1kfhkal/open_webui_is_no_longer_open_source/
https://old.reddit.com/r/LocalLLaMA/comments/1mncrqp/ollama/
Basically, they’re both using their popularity to push proprietary bits, which their development is shifting to. They’re enshittifying.
In addition, ollama is just a demanding leech on llama.cpp that contributes nothing back, while hiding the connection to the underlying library at every opportunity. They do scummy things like:
- Renaming models for SEO, like “Deepseek R1” which is really the 7B distill.
- Shipping really bad default settings (like a 2K default context limit and imatrix-free default quants) which give local LLM runners a bad impression of the whole ecosystem.
- Messing with chat templates and, on top of that, creating other bugs that don’t exist in base llama.cpp.
- Sometimes lagging behind on GGUF support.
- And other times making their own sloppy implementations for “day 1” support of trending models. These often work poorly; the support’s just there for SEO. But this also leads to some public GGUFs not working with the underlying llama.cpp library, or working inexplicably badly, polluting llama.cpp’s issue tracker.
I could go on and on with examples of their drama, but needless to say most everyone in localllama hates them. The base llama.cpp maintainers hate them, and they’re nice devs.
You should use llama.cpp’s llama-server as an API endpoint. Or, alternatively, the ik_llama.cpp fork, kobold.cpp, or croco.cpp. Or TabbyAPI as an “alternate” GPU-focused quantized runtime. Or SGLang if you just batch small models. llama-cpp-python, LM Studio; literally anything but ollama.
As for the UI, that’s a muddier answer and totally depends on what you use LLMs for. I use mikupad for its “raw” notebook mode and logit displays, but there are many options. Llama.cpp has a pretty nice built-in one now.
That would be awesome. Like a greasemonkey/advanced unlock for those of us who don’t know how to code. So many times I wanted to customise a website but I don’t know how or it’s not worth the effort.
But only if it was local, and especially on mobile, where I need it the most, it will be impossible for years…
I mean, you can run small models on mobile now, but they’re mostly good as a cog in an automation pipeline, not at (say) interpreting english instructions on how to alter a webpage.
…Honestly, open-weight model APIs for one-off calls like this are not a bad stopgap. It’s literally pennies, and power-efficient.
You mean to use an online LLM?
No. That’s what I don’t want. If it was a company I trusted I would, but good luck with that. Mozilla is not that company anymore, even if they had the resources to host their own.
But locally, or on a server I trust? That would be awesome. AI is awesome, but not the people who run it.
I mean, there are literally hundreds of API providers. I’d probably pick Cerebras, but you can take your pick from any jurisdiction and any privacy policy.
I guess you could rent an on-demand cloud instance yourself too, that spins down when you aren’t using it.
You know what would be really cool? If I could just ask AI to turn off the AI in my browser. Now that would be cool.
I can’t fucking believe I’m agreeing with a .world chud.
Well fortunately for you, I don’t know what that means.
Your server has, not a monopoly on, but a majority of the worst shitlibs and other chuds. To the point that I’m genuinely surprised by agreeing with someone there, and am worried that when I examine it closely you’ll be agreeing with me for some unthinkably horrible reason.
The problem is I fundamentally do not understand how Lemmy works, so I just picked what seemed obvious. Like why wouldn’t I want the world.
Also I thought from just reading sub-Lemmies? that .ml was the crap hole.
Also, I looked up Chud and that’s really mean.
I would say that while there are general rules of thumb, it’s generally good to never assume the intentions or beliefs of another user based solely on their home server. There are nice people all over, and there are also a lot of assholes all over.
By the way, as to your question mark, they are just called “Communities” on Lemmy typically, though I think some instances call them something different occasionally.
You’re on the shitlib chud server; shit happens.
Why not just distribute a separate build and call it “Firefox AI Edition” or something? Making this available in the base binary is a big mistake, at least doing so immediately and without testing the waters first.
There is a Firefox Developer Edition, so I don’t see why not. I personally don’t care to see them waste the time on AI features.
B- but- ✨yaiy window!!1!✨!1!!
Personally I don’t want AI anywhere.
I think Mozilla’s base is privacy-focused individuals, a lot of them appreciating Firefox’s open-source nature and the privacy-hardened Firefox forks. From a PR perspective, Firefox will gain users by adamantly going against AI tech.
Maybe their thought process is they’ll gain more users by adopting AI while knowing they’re still the most privacy focused of the major browsers. Where have I seen this mentality before?
Spoiler
The American Democrat party often believes it can get more votes by shifting conservative, believing the more progressive voters will still pick them because they’re still more progressive than not.
I think I’ve lost hope at this point of seeing AI be actually useful in any application except ChatGPT and code editors.
Companies are struggling to figure out how to use AI in their products, because it doesn’t actually improve their product, but they really really want it to.
Studies show it’s not useful for that either.
It does have applications in doing and excusing warcrimes, and panopticon bullshit.
deleted by creator
It depends. If it’s just for the sake of plugging AI because it’s cool and trendy, fuck no.
If it’s to improve privacy, accessibility and minimize our dependency on big tech, then I think it’s a good idea.
A good example of AI in Firefox is the Translate feature (Project Bergamot). It works entirely locally, relying on trained models to provide translation on demand without having Google, etc. as the middleman, and Mozilla has no idea what you translate, just which language model(s) you downloaded.
Another example is local alt-text generation for images, which also requires a trained model. Again, it works entirely locally, and provides some accessibility to users with a vision impairment when an image doesn’t provide a caption.
I considered using AI to summarize news articles that don’t seem worth the time to read in full (the attention industrial complex is really complicating my existence). But I turned it off and couldn’t find the button to turn it back on.
If you need to summarize the news, which is already a summary of an event containing the important points and nothing else, then AI is the wrong tool. A better journalist is what you actually need. The whole point of good journalism is that it already did that work for you.
That should be the point but there is barely good journalism left.
How does AI mis-summarizing the (allegedly bad) journalism improve it?
It SLAMS away all the fluff.
I have a real journalist, but this is more on the “did you know this was important” side. Like how it’s fine to rinse your mouth out after brushing your teeth, but if your water isn’t fluoridated then you probably shouldn’t (which I got from skimming the article for the actionable information).
you have to be REALLY careful when asking an LLM to summarize news, otherwise it will hallucinate what it believes sounds logical and correct. you have to point it directly to the article, ensure that it reads it, and then summarize. and honestly at that point… you might as well read it yourself.
And this goes beyond just summarizing articles you NEED to provide an LLM a source for just about everything now. Even if you tell it to research online the solution to a problem many times now it’ll search for non-relevant links and utilize that for its solution because, again, to the LLM it makes the most sense when in reality it has nothing to do with your problem.
At this point it’s an absolute waste of time using any LLM, because within the last few months all models have noticeably gotten worse. Claude.ai is an absolute waste of time, as 8 times out of 10 all solutions are hallucinations, and recently GPT-5 has started “fluffing” solutions with non-relevant information, or it info-dumps things that have nothing to do with your original prompt.
AI is very much not good at summarising news accurately.
https://pivot-to-ai.com/2025/11/05/ai-gets-45-of-news-wrong-but-readers-still-trust-it/
ai can be good as long as you don’t let it think for you. i think the problem is taking resources from development and building it into a browser when it could just be a bookmark to a webpage.
why don’t they just put vivaldi’s web panel sidebar into firefox instead, so you can add chatgpt or whatever as a web panel? i think that would be infinitely more useful (and could be used for sites other than ai assistants).
ai can be good as long as you don’t let it think for you
Unfortunately, there’s too many people already doing that, with not so clever results!
If it increases accessibility for those with additional requirements then great, but we know that’s not even in its top 10 reasons for being implemented.
Don’t they already have that last thing you mentioned?
Cannot wait for Servo & LadyBird to take off
I’m thoroughly sick of Mozilla’s shit.