Vlad's AI Rant: Cancel your subscriptions

I have been growing angrier and angrier at many things generative-AI related in recent months. Rising RAM prices and Crucial exiting the consumer market have put me over the edge.

I love/hate generative AI
I love how useful LLMs and generative AI are, but I hate how people have been using them.

Personal use

I use ChatGPT fairly liberally. There are legitimate use cases that LLMs are very useful for.

What I don’t do with AI:

100% vibe code - I hate AI code. It sucks. 80% of the time I end up rewriting it anyway.

Thinking by proxy - Too many users ask AI for an answer before taking even a second to form their own opinion. This is leading to a psychosis epidemic.

Writing - I write my own words 99% of the time. When I see someone ask ChatGPT to write something for them, I immediately cringe and lose respect for that person.

What I do use AI for:

Autocomplete/Boilerplate: When I write code, I LOVE context-aware autocomplete. Being able to write a function where the model can see what I intend to write and the variables I'm using, and it just knows what I'm about to type, is SO NICE. I also write a fair amount of boilerplate code that AI is really good at producing mostly the way I want, and I can quickly fix the things it got wrong.

Idea challenger: I often tell AI to convince me I'm wrong about something. I recently got in a fight with ChatGPT about whether glass doors/tables/glass-top stoves should exist (they shouldn't).

Idea refiner: Sometimes when I have an incomplete idea, I will tell ChatGPT about it and see if it can enhance it in some way. I'll guide it toward what I want (requirements, constraints, optionals, etc.) and refine the idea into something much better than what I started with. This often results in changed requirements, better toolsets, smarter decisions, etc.

Writing: I use AI for templating: making markdown formatting more appealing to the eye, and (only sometimes) adjusting my phrasing to be more professional while keeping my message the same.

Environmental impact

We do not have the electrical infrastructure necessary to run high-powered compute at scale. AI requires a metric fuckload of energy to operate at scale. We are burning SO FUCKING MUCH COAL for AI. We need more nuclear infrastructure. Thankfully, this is getting a bit better, but it's entirely controlled by the private sector and private equity, which makes it a capitalist nightmare in the making.

Local economics are hurting people

Economics

This commercial could have been made by a team of 3D artists and graphic designers.
It was made with prompts and Sora.

Unemployment will only increase as employers try to penny pinch and save on human labor. They were doing it before AI, and AI will only make it more defensible to shareholders.

I have cancelled ChatGPT

I used to have a ChatGPT subscription. Commercial AI is a bubble and is hurting all of us.
Commercial generative AI should be illegal.

How to cancel but still use AI:

  1. Install LM Studio: https://lmstudio.ai
    • LM Studio will detect your hardware and suggest the most powerful models you can run
  2. Download some models
    • Aim for GGUF-formatted models that can offload to VRAM if you have it
  3. Load them up and try some questions to feel them out. See how they perform and what outputs you get
  4. Download your ChatGPT history and delete your account
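As a sketch of what step 3 can look like beyond the chat window: LM Studio can expose an OpenAI-compatible local server (by default at http://localhost:1234/v1), so you can poke at your local model with nothing but the Python standard library. The port, endpoint path, and `"local-model"` name are assumptions based on LM Studio's defaults; check your own install before relying on them.

```python
import json
import urllib.request

# LM Studio's default local server endpoint (an assumption; verify in the app's
# "Developer" / server tab after enabling the local server).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(prompt: str, model: str = "local-model", temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completion request body for the local server."""
    return {
        "model": model,  # LM Studio typically serves whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local(prompt: str) -> str:
    """Send the prompt to the local LM Studio server and return the reply text."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible response shape: first choice, assistant message content.
    return body["choices"][0]["message"]["content"]
```

With LM Studio running and a model loaded, `ask_local("Explain GGUF quantization in two sentences.")` should return the model's answer; without a running server it raises a `URLError`, which is a handy smoke test in itself.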
28 Likes

Feeling similar in many ways. I’ve just finished watching the latest Gamers Nexus video.

WTF Just Happened? | The Corrupt Memory Industry & Micron - YouTube

Fortunately I had bought RAM earlier in the year, but with the Crucial news, SSDs are starting to creep up as well. I just ordered a Samsung NVMe before those start increasing too. Prices are skyrocketing.

5 Likes

You’ll probably appreciate this YouTube channel, he covers much of the bs in the tech/AI sector.

3 Likes

People pay for AI? The funny thing is that realistically those $20/$200 subscriptions don't cover the cost if you are an active user. So even if you're paying, as long as you're using the service you're costing them money.
But yeah, shit sucks. The novelty has worn off.

I feel so lucky that I was able to get my new PC built at the start of October. Seeing the DDR5 64GB kit I bought for ~$100 now going for over $1,000 just a few weeks later makes me wonder if this is going to be the last full PC I ever buy.

2 Likes

While I agree that there are people who are easily misled, this was true of all sorts of institutions before chat bots gained prevalence. The people falling into some sort of psychosis were, by and large, already heading down that path. There definitely needs to be some regulation put in place to keep it from reaching the point where it can affect everyday people in that way.

I think most normal people use LLMs in the same way that you do. They don’t use it as a companion, or someone to tell them how to think, or to do all of their work for them. Some people try, but they will fail in the short term. They use it as a way to refine the work they’re already doing or the ideas that they already have. Of course, you’re going to see tons of reports that make it seem that isn’t the case, but that’s how most forms of media have operated in recent memory. Using AI for what it actually can provide today does not make the most enticing news article or Youtube video, so you won’t see it often in media.

When it comes to the environmental impact, I totally agree. These AI companies should be mandated to foot the bill on expanding the energy and water infrastructure to be allowed to operate. The cost should not fall on the taxpayers/government, and the drawbacks should not be felt by the people.

The promises of revolutionizing healthcare, productivity, standard of living, and more have not come to fruition, and nothing seems to prove that they ever will. As far as we know, AI is, and might always be, just another helpful tool in the toolbox.

1 Like

Many of those companies offer a subscription that includes a limited number of "credits" per month to spend on generating AI content, and then require you to pay more if you go beyond the credit limit. I think most of them have the costs covered, or are okay taking a loss to attract users before they pivot to a profitable strategy.

I have the displeasure of working with a lot of people in the tech sector. Competence is unfortunately uncommon. I would say about 20% of people in tech where I am are competent. I have had users argue with me, telling me I'm wrong, citing AI. It's frustrating when so many people (including a former CTO and, currently, one of my business partners) have such blind trust in AI. I'm sure you're right that a lot of users use AI the way you say; I have yet to see them. Even with my wife, I've had to guide her on how to use LLMs effectively and to be careful about blindly trusting them to solve her problems and answer her questions.

Your usage of AI seems to be closer in line with the way I think it should be used. It's a tool that can be helpful here and there for specific things. It can be used to challenge thinking, but not to substitute for it entirely. If people rely on it too much, they'll start offloading their thinking to AI and blindly trusting it, which seems to be exactly what the companies want, given how hard they push for it. I can only imagine the harm that will do to the next generations. That's a dystopian future I don't look forward to.

The way I use AI is very minimal: some simple scripting and rough translations, not even using up all the free credits.

Companies are shoving AI into everything just because they can (Windows 11…why?). The amount of money and resources it's using up is seemingly never-ending, and they just keep throwing more into it. It's going to keep growing until something gives. Maybe they are hoping for some breakthrough to offset the cost? There are also data centers that have been built out but don't even have enough power to turn them on, so they sit unused and will be out of date in about five years.

Sitting on a 2020 build and dreading the time when I need to jump

1 Like

Ideologically, this is a fine opinion to have. However, I don't think cancelling your subscriptions is going to change anything. Everyone everywhere could cancel their subscriptions right now and it wouldn't slow this train down even a little. It's only getting worse, and there is nothing you can do about it. You can't kill a market by withholding what it isn't built on: consumer use is probably the smallest pillar of the AI bubble. This is not being funded by use. It's being funded by enterprise contracts, speculation, and the massive stores of capital these huge tech companies amassed during the 2000s.

If this bubble is going to burst, it’s gonna have to happen on its own.

Does anyone know any good LLM for writing erotica? I've tried running a few locally and most just tell you it's against TOS.

Yeah. AI is very new. It can do some remarkable things. But I don't think you can trust it. You have to check, double-check, and triple-check everything it produces and fix it where necessary. And that's for everything AI is used for, not just scripts. I mean code, summaries, images, videos, text, everything.

AI is a tool, not a panacea.

I don’t have any AI subscriptions. What little AI I use, I use selectively and carefully, and only then as a starting point. I would never use AI to finish anything, or use its output without going over everything and tweaking it myself.

It's the same for everything really, isn't it?

Full of pros and cons.

I find ChatGPT incredibly useful. I use it for assistance with technical things and to help my decision-making when I have multiple choices. I've even used it for help mentally: it's helped me challenge my overthinking and negative thoughts, TRY to stop them, and be more positive about myself, and I've had some nice results with its help. I don't rely on it for anything; I just use it as an aid.

A small example: about 15 years ago, in my teens, I recorded a rap album. It's very personal to me, and I've let nobody listen to it, out of embarrassment really: negative thoughts, scared of being judged by my own family (very silly). So I discussed this with ChatGPT, and it made super good points and really got me thinking about how silly hiding it was. SO I decided to finally put it together into the album I always intended to create, and I did it. I now have that 10-track album of all my own hard work and effort. I gave it to my brother and his fiancée to test the waters and get their opinion, and they LOVE it. I got nothing but positive feedback, which boosted my overall morale and confidence a bunch. Now I have it all set up to give to my dad on Christmas day, as he always wanted to hear my songs. ChatGPT helped me reach this goal, and that for me is super positive. It even helped me create the artwork for the album and turned an old picture of me from my teenage years into a hip-hop poster lol. It's awesome.

So yeah, pros and cons. Wouldn't it be great if humans could make something that had zero cons lol

2 Likes

Agree with most of your viewpoints.

If I might add to the rant in the subjects of society, politics and economy:


Reality Distortion:

“The ideal subject of totalitarian rule is not the convinced Nazi or the dedicated communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.”
~Hannah Arendt

This has been accomplished with each new technology over the years. During the buildup to the Third Reich, Goebbels made great use of the then-revolutionary radio. Jump to a few years ago: billionaire-owned media conglomerates, and then social media, which became the primary means of propaganda and of spreading mis- and disinformation generally. The extreme right wing has made good use of that to gain power all over the world, while other political forces still view these channels and methods as beneath them (although that is slowly changing). And now AI is put on top of all that, making the emotionalized messaging even more powerful with sound and images.

We've all seen how generated videos, images, and audio are getting more and more realistic, to the point where even the best at recognizing AI-generated media have to take a second or third look. We've seen how people cling to "Alternative Facts™". And we've seen the White House making very blatant fakes of their political opponents. How powerful will propaganda get when it's done more subtly? How much faster and more efficiently can misinformation spread? What effect will that have on people's perceived realities? What does that mean for the (as we love to claim) basic values of democracy, freedom, and the rule of law?


Blind Trust: As part of a broader set of societal ills, way too many people trust LLMs way too blindly. That can be especially harmful when they get their news or their medical or legal advice from chat bots without checking the "facts" underlying them. Moreover, the classical search engines now use LLMs to generate answers, which get taken as gospel, as can be seen in the declining traffic to primary sources, or even just to Wikipedia, which at least cites sources.

On a lighter note: LLM-generated recipes usually turn out hilariously bad. That is, until people poison themselves with bromide.


Readability: It might be a pet peeve of mine, but I find AI-generated text a lot less engaging, readable, and understandable, which is why I tend to close AI-generated pages immediately.


Mega-Bubble: The Financial Times reported that ten AI startups, which have not made a single dime, gained over $1 trillion in market value over the last year. More than a few people are comparing it to the dot-com bubble, only worse. Two thirds of US GDP growth for the first half of 2025 was AI spending. Again, without any returns. This is a huge bet that threatens the economy worldwide, especially now that whole industries are orienting themselves toward the AI market and away from the consumer market (see Micron, the third-largest producer of memory and storage, the other day).


Entry-level job hiring is stalling in the US, with "AI" exacerbating the inflation-fear-driven hiring crash that started a few years back. What do you think happens when people out of school or university can't find jobs? What does it mean when boomers are retiring in droves these years? (Except in Congress, of course.)


Industry leaders are open about job replacement being the goal. And they are going further: Peter Thiel was not able to answer the question "Should the human race survive?" with a clear yes. Why? Because he believes in transforming humans into some form of AI cyborgs. I'm not even kidding; look up transhumanism, Thiel, and the companies he's involved in. The leaders in AI are sociopathic billionaires. No, we shouldn't subscribe to their services, even if they are running on circular investments (one AI company investing non-existent money in the next, which invests in the next, and so on) and the subscriptions aren't critical to their survival. If anything, by using these tools we help develop their products (self-hosting, like you described, notwithstanding).

Since there is no democratic oversight or influence (see next point), they are exempt from many rules and they alone decide how to develop these systems. The process of replacing capitalism with ever more feudalism-adjacent structures is coming along just nicely.


Regulation ban: What's worse, the US Congress is in the process of banning states from regulating "AI" in any way. It has failed twice so far, but they will keep trying. Alas, you can always rely on the White House to rule by decree in the end anyway.

And the Europeans are cucking themselves to the US in advance by promising not to regulate or tax any US tech companies. And what are they getting for that promise? Tariff rates rising from around 2-2.5% to 15%. And they're selling that as a win as well…

Point is that there is no democratic oversight over what is being done with these systems and how they can be used. It’s all in the hands of a handful of sociopathic, transhumanist billionaires.


Privacy Implications: Moreover, Europeans are gutting their own privacy laws and replacing them, step by step, with authoritarian privacy nightmares like "chat control". Based, of course, on faulty AI.

Instead of the promised revolutions in science and medicine, we see AI being adopted to make all-encompassing surveillance (e.g. Palantir) and killing (e.g. Microsoft and the IDF) more efficient. Wherever these few maniacs decide it is best to invest.


Intellectual Property: In a world that is extremely strict about IP laws and spends inconceivable amounts of money on litigation and enforcement, generative "AI" is built on blatant disregard of those same laws. And they are open about it, lobbying for exemptions by law and stiffing the original creators out of their share, knowing that anything different wouldn't be viable.


Electricity: Just when we must move away from fossil energy, which means converting not only our current electricity consumption to green energy but all of our primary fossil energy consumption as well (most of which will be transformed into electricity), a dramatic increase in total energy consumption is exactly the wrong direction to head in.

2 Likes

Wizard Vicuna 13B Uncensored.
It's a little memory hungry, but I have had it write decent erotica.

It also doesn’t shy away from taboo subjects.

I agree. I worked in the tech sector for about 8 years before pivoting entirely. Most of them are incompetent; that's one major reason why I got out. The people who are the biggest pain in the ass will easily dominate our impression of how people use tech.

Most people don’t even purposefully engage with AI on a daily or even weekly basis. It’s the ones who do that you need to watch out for, because they often rely on it. The other side of the coin is that AI is wrapped up in a bunch of tools (email spam filters, GPS apps, media recommendations, etc.), but that isn’t actively engaging with AI.

2 Likes

I wonder if there will be a future career in “AI spotting”: contractors highly skilled in reason, logic, linguistics, history, et al., hired by companies, law firms (AI in the legal world is a huge problem), universities, etc., to check documents and sources for AI-generated content and remove or correct its failings.

I have used Google for my small business: domain, email, calendar. Everything in one spot is convenient for outside B2B. The introduction of AI into the Workspace apps has me shopping for a different solution. I don’t want to be training their LLM on an outside sales agent’s emails… that’s how I make myself replaceable by a machine. I realize it’s too late already; they’ve gone through my whole 15-year archive by now. But I don’t need to keep feeding it.

It’s not the power users or even the daily active users I’m worried about; there are always early adopters who get enthralled with new tech. It’s the casual, unaware, oblivious, frankly “feed the machine” masses that worry me more, sleepwalking us all into a dystopia.

1 Like

Agree 100%.
Companies using AI is an infinitely larger problem than personal use, though, I fear. At my work they’re pushing for everyone to use AI as much as possible, with no regard to cost. They will spend tens of millions with no return. All they want is for people to use AI as much as possible, just in case AI becomes more important in the future.

This is why it’s easy to tell it’s a bubble.

Normally when you invest in something, you have a decent expectation of what you are going to get. It might not always end up in profit, but you do have an idea. With AI this is completely missing. They only invest based on “it’s going to improve in the future”.

More than half of AI-investing companies have no target to work toward. Companies like Apple can integrate it into Siri (Google’s Home devices and phones can benefit here as well); Microsoft has a huge investment in coding and its own IDE, which means AI-generated code can seriously help improve that (vibe coding remains a joke). Analytics companies can use AI to search for potentially undiscovered patterns they can build on.

But almost all other companies just want a better “search engine” or “answering machine”. These are flawed, since LLMs do not involve any thinking at all; at best they can cross-reference the output and check that it’s not garbage. But that relies on the input, and the internet is already spoiled to such a degree that most sources are unreliable. Too many news sites already generate header images with AI, which is serious information pollution.


The second problem is user tracking. To make AIs work better, user context is generally required. But it is such a big privacy problem that even if those companies tried it, governments would act on it. And before you say “China doesn’t have this problem”: such tracking changes user behaviour, so even then the data is invalid for any region that does not have such tracking.


And lastly, there is the constant loop of companies buying each other’s AI products just to funnel money to the other company and keep it from collapsing. Companies caught in this cycle are extremely vulnerable. Nvidia will have a significant financial problem if ChatGPT collapses, and since they don’t build AI products themselves, such a collapse would flood the market with so many chips that for quite some time afterward they would have nothing to sell. The huge investments are counted as part of what they “own”, but if that falls away in a week, bankruptcy becomes a realistic problem. They have no protection against it. This is why they are valued so highly: it’s an internal Ponzi scheme they are facing; they generate money that doesn’t exist.

If the bubble collapses, I don’t think we will completely lose AI. Companies with a clear target and use case for AI will survive it, and companies with small investments can generally just continue. Sure, a lot of people will lose their jobs and a huge financial crisis will happen. But the key companies that know what they are doing can still carry on, as there are plenty of people who barely work with AI beyond occasionally generating some code. Boo-hoo if ChatGPT crashes and Grok goes down; Copilot (no matter how shit it is) can still generate some simple functions and save time.


And another problem: most of these AI tools aren’t really “intelligence”. They are a huge bulk of data with a lot of linked information in it. Whenever something is requested, the system generates a random seed and combines it with your tokens to generate something from within this bulk of data.

There is no intelligence in it at all; it’s mathematics and randomness over a data set that can produce some meaning out of it.
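The "random seed plus mathematics" description above can be boiled down to a toy sketch: at each step a language model ends up with a score (logit) per token in its vocabulary, turns those scores into probabilities, and draws one, so the same seed reproduces the same "choice". The four-token vocabulary and the logit values below are made up purely for illustration; real models work over tens of thousands of tokens.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution; lower temperature sharpens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, seed, temperature=1.0):
    """Seeded sampling: the 'randomness' is just a weighted draw over token probabilities."""
    rng = random.Random(seed)
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical 4-token vocabulary with made-up scores favouring "cat".
vocab = ["cat", "dog", "the", "ran"]
logits = [3.0, 1.0, 0.5, 0.2]

# The same seed always reproduces the same "choice" - no thinking involved.
assert sample_next_token(vocab, logits, seed=42) == sample_next_token(vocab, logits, seed=42)
```

Repeating this draw, feeding each chosen token back in as context, is the whole generation loop; everything that looks like reasoning sits in the learned scores, not in the sampling.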

Any “intelligence” these AI tools have comes from pre-parsers or intermediate checks that correct the AI when it goes the wrong way. Most of these checks are still programmed the old-fashioned way, and while they were made by someone intelligent (note: sometimes by idiots), they are the only part that can produce something that looks intelligent, and sometimes something equally stupid.
Examples:

  • Google and their forced tokens for inclusivity, creating a Black Hitler. The problem was that the tokens were inserted after any checks they performed, which made it easy to ask for something and then let Google add the token to make it generate something racist.
  • Some time ago you could see AIs revert context they had generated; this was a separate background check correcting the AI. It was removed because the problematic output, even shown briefly, caused negative marketing.
  • AIs are designed to only take in information that has been verified in some way. We all remember how Microsoft’s AI went racist within a day; that was because it never checked any sources and had no monitoring of whether information was worth storing.

These tweaks are the only intelligent improvements AIs get, and they’re only as good as the programmer who made them. It’s limited.

1 Like