It sounds great. I've been browsing this website for about two months, and I've found that many users are willing to share all sorts of topics and opinions; I think many things just need a little motivation. Besides, this is really your own accumulation from your teens. I think this discussion is really exploring what AI means for an individual's sense of existence. As OP said, words can represent a person's thoughts and existence, which is why OP hates AI writing. So, if you hadn't discussed this with AI, would you still want to make an album, whether or not it ever got finished? Or would you feel, like before, that enjoying the process is enough?
I just feel ashamed that "AI" has become an umbrella term no less bloated than "metaverse" was in the early 2020s.
AI in most cases is black-box software. Nobody really knows what it does, but since "black box" sounds negative, that's not what they want to call it in public. The names of the things it relies on or does are also usually suboptimal: machine learning, large language model, denoising.
Marketing obviously wants to call it AI, because intelligence sounds good, and that's what counts.
It's not the only industry where this happens.
With cars, they talk about things like "autopilot" instead of just "lane assist", because "auto" sounds more automatic and more convenient.
Or in the food industry: the number of meals and drinks called healthy even though they are full of sugar, highly processed, etc. Yes, a "healthy" fruit drink that's 18% sugar, 80% water, and 2% flavoring, while the fruit on its own would have been less than 10% sugar.
But then again, "AI" was never taken strictly anyway: in games the computer opponents were nearly always called AI, even though they were usually extremely dumb, very hardcoded algorithms. So I can't blame anyone for doing the same with the current "AI" hype.
It's relatively easy to reach something that is 80% done with relatively little work. However, now we are there and need to fix the last 20%, and that's usually 80% of the cost. Compare with Elon Musk's statement in 2015, where he claimed the self-driving car problem was solved and cars would be available in just a few years. That was over a decade ago, and the state of self-driving cars hasn't really changed much since then. It's the last 20% that needs to be solved, and I guess we still have 15% of that left. You can probably compare this with AI-generated funscripts. Many think they are decent, or good enough, since the alternative is not having a script at all. However, they still need to pass that last 20%, and that's the tough part.
Another question is how companies are going to earn money on AI, because most are in the red. The investments are huge and concentrated in a few companies. Microsoft presented a lot of new AI agent development tools at their Microsoft Build developer conference early last year. Since then they have lowered their sales expectations twice (by 50% in total) due to low sales.
To me, AI resembles the dot-com boom of 25 years ago: when the AI bubble deflates, the real value will remain. There is value in AI, but it's not what we are seeing right now. And above all, LLMs are not a silver bullet that solves all problems. An LLM is just a tool for processing (mostly) text in a way that gives the impression of "intelligence" or "understanding".
I think it's more extreme.
The first 80% was when the internet was clean: most data was user-generated and bots were easily recognised. The remaining 20%, however, is highly contaminated data. It takes exponentially more time to process (you can't cross-reference the data, since bots can spam it in many locations and use AI to alter it enough to look distinct). This 20% will most likely be a recurring yearly cost.
And it amplifies itself: AIs become better at looking distinct and trustworthy, which causes more content to be based on AI-generated content (and therefore also be spoiled, even if a human wrote it). This is a vicious cycle that's impossible to act against.
As a result, AIs won't improve: they will be able to detect plenty of false information, but will generate and fall for new false information just as easily, becoming a very expensive maintenance dump.
And the moment they become perfect, profit-making instantly ends as well, since there is no improvement left to make, and you can be sure a free alternative will appear soon after (even if illegally; some countries just won't care).
While we are beyond the 80% progress mark now, cost-wise I think we haven't even seen the first 5% yet. Even if the bubble collapsed today, it might stop the funding for development here, but at some point someone will figure out the next step (note: a completely new AI implementation, most likely even quantum-based) and cause huge developments to take place.
Creating a perfect AI is a paradox: if it's perfect, it's no longer AI, it's just I. This is why it's impossible for any company to do this.
I agree, I just tried to generalize it a bit.
I've seen a figure that around 50% of content on LinkedIn is already AI-generated these days. I don't know whether that means completely AI-generated, or AI-generated from a draft text. Regardless, if AI is trained on AI-generated content, it will probably end up similar to inbreeding among animals.
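As a toy illustration of that inbreeding idea (just my own analogy, not how any real training pipeline works): if each generation of a model only ever reproduces samples of the previous generation's output, distinct content can disappear but never reappear, exactly like genetic drift in a small population.

```python
# Toy sketch of the "inbreeding" effect: each generation trains only on
# samples of the previous generation's output, so diversity can shrink
# but never recover. Purely illustrative, not a real training pipeline.
import random

random.seed(42)
population = list(range(100))  # 100 distinct "ideas" in the original human-written data

for generation in range(1, 61):
    # The next "model" can only reproduce things it saw in the last round.
    population = [random.choice(population) for _ in range(len(population))]
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {len(set(population)):3d} distinct ideas left")
```

Run it a few times: the count of distinct ideas only ever goes down, which is exactly the problem with feeding AI output back in as training data.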
Seeing how many of my colleagues cannot write an email beyond a 6th-grade level is depressing. These are key account managers, vice presidents of global sales, founders and owners… BDR managers, independent sales reps, marketing directors…
I can spot it each and every time one of them uses ChatGPT or another LLM to write an email. I started noticing how many of them suddenly became more coherent when Grammarly popped up on the scene; they also sounded more "sanitized". Now it's word salad, or the kind of hyped-up word-count padding we used to play around with in the second year of an English degree.
I wonder what things will look like as the Ouroboros keeps eating itself.
LinkedIn is probably the most AI-bloated social media platform out there. The amount of ChatGPT-generated, emoji-riddled posts is insane.
There are a few things that AIs aren't good at. This is because AIs target the average use of language, which can cause them to purge information from a text and make it blander.
- Any reading order that's inconvenient. AIs are very good at holding context within a text block, so they won't suddenly add extra details (this parenthetical is an example of what I mean) that feel out of place. An AI would generate that information somewhere it sounds more fluent, and wouldn't use brackets.
- Note: across multiple blocks it can sometimes lose context, but that's when an output token was generated only for that specific block; usually the token gets preserved in later text. (If you ever wonder why an AI sometimes self-corrects or contradicts itself, this is where it hit an error.)
- AIs cannot convert tokens into something else. So the 'fuel based vehicles' part is highly unlikely to have been generated from the word 'car' or 'driving'; 'fuel' must have been in the prompt. (Also, for LinkedIn they would have used 'combustion based'.) 'Car' might be translated to something like 'road vehicle' at best, as that still covers roughly the same set.
- It can create new tokens, or decide to purge one entirely (you can often even request this, although it fails regularly), but conversions don't happen.
- As an example: 'electric' and 'car' are each tokens on their own, but also exist as a combination. AIs generally have a poor reference point here, as they rely on statistics. Now let 'mercedes', 'audi' and 'ford' also be tokens at a similar level. It's very complex for an AI to find anything here, so it will most likely stick to nearly the same tokens, or a broader one ('mercedes' → 'car' → 'vehicle' is easy, but from 'vehicle' it can't guess 'mercedes'). See the tokenizer sketch after this list.
- For reviews or introductions, AIs are very eager to tell your history, and most often put it at the start, since that establishes status and makes the following text stronger. A mention at the end makes the weight of the connection weaker. In this example it reads at about the same level as 'I googled something' or 'I used Windows'.
- The reason is that tokens are just tokens; their order is rarely truly relevant when generating a text. While an AI is context-aware in, for example, programming, in a story like this the detail can be positioned anywhere without changing anything.
- This is the same reason ChatGPT wants to make a function out of everything: at that point it can use that function as a token in its output, rather than having to cross-reference.
- Texts usually have an average length per platform, and AIs will try to match it, since, again, any deviation is suboptimal according to their algorithm.
- AIs want to answer. Anything negative is nearly always followed by something positive to still make it sound good. 'You can't use X, but you can use Y' is a common answer to 'Can I use X?'. I was so tempted to mention Bluesky here, but in this example I wanted to just use X and Y as placeholders.
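To make the token point above concrete, here's a minimal sketch using OpenAI's tiktoken library (my choice for illustration; any BPE tokenizer shows the same thing): words map to opaque integer IDs, and nothing in the IDs themselves says that 'mercedes' is a kind of 'car'. Any relation the model draws between them has to come from learned statistics.

```python
# Minimal sketch with the tiktoken library (pip install tiktoken): a
# tokenizer turns text into opaque integer IDs. The IDs carry no built-in
# hierarchy, so "mercedes" -> "car" -> "vehicle" is pure learned statistics.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
for text in ["car", "vehicle", "mercedes", "electric car", "fuel based vehicles"]:
    print(f"{text!r} -> {enc.encode(text)}")
```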
LinkedIn is a special case, since it's a site that basically wants people to write the most bullshit description you can think of, which AIs are excellent at generating. This is also why LinkedIn descriptions are usually considered useless.
Let's compare a few:
I have years of experience creating an operating system
vs
I work as a programmer at a company that excels in software meant to optimize the performance of its users. After years of experience working on operating systems I have become an expert, and I made my own operating system
vs
I made Linux
While the first one is correct, it doesn't read as impressive as the 2nd one at first sight, even though the information is exactly the same. Yet if you can write the 3rd one, that is significantly more impressive than the 2nd, and you don't need the extra context anymore; in that case adding it would actually devalue the description.
AIs are good at 2 and 3, but this is why 1 is generally valued more. It says exactly what is needed, and shows that you care about exactly the thing that is needed: efficiency. The 3rd example is just a luxury if you can claim it. Any good recruiter at a good company will easily pierce through bullshit.
For example, I could have said 'I'm an expert at making a plugin for vibration scripts', or just 'I made MakeVibrationsExt'. But in no way is there enough value to extend that to 'I am an expert at converting a linear script, using an optimized algorithm, in a way that turns it into a vibration script', because that very statement would instantly break the 'optimized' part: the text itself just isn't optimized.
That's why LinkedIn is a joke. No competent developer uses it, because they simply don't need to.
(The worst part is, I'm always writing huge texts… I'm worried what would happen if I started pushing them through an AI… the garbage it would come up with…)
That's maybe why the bubble won't burst. Imagine what the real cost would be to license all the IP instead of stealing it.
I mostly agree, the environment and the effects on the consumer PC market being big reasons.
I've always had some interest in coding and software, but I only recently tried vibe coding. I actually like it for what it is. I don't think it should be used heavily in professional work, or by people who really should know better about its shortcomings, or as the foundation for open-source projects where you expect people to collaborate or fork a repo.
For someone like me, though, it's been really convenient. The best way to put it is that it has given me more control as a layperson when messing around with simple existing tools and apps. A lot of times I'm using something and think "I wish this could also do X," or the docs aren't great so I don't fully understand what it can do.
The code I end up with is probably a patchy, bloated mess, but that's fine; I'm not expecting anyone else to read it or use it. It has also opened me up to the existence of cool libraries and research projects (and taught me that if I want to learn what a repo does, I should actually look through the repo itself). I'm more interested in learning a programming language now that I've got a taste of why people find it rewarding. There's only so much you can get away with blindly asking AI to explain what it's doing and why before it becomes a back and forth. However, it's nice being able to express an idea and see it implemented in a way that actually functions, even if behind the scenes it's a mess.
“This is your brain, and this is your brain on AI”
RAM prices are the least of our worries; prepare for Idiocracy…
Better pay attention to a healthy diet and exercise. Your future doctor is using ChatGPT to get through med school right now.
Edit: And as we can see from the video you've posted, their lecturer might be doing the same.

