AI is overhyped. It has morphed from a niche area of technical study into this weird cultural phenomenon that people think is going to turn the world into some kind of dystopian hellscape.
Just like a programming language or framework, LLMs are tools built on a pre-existing set of tools; they don't create original ideas, they shuffle around and amalgamate ones that already exist.
The importance of AI isn't found in creativity or originality. AI is important because of how it impacts speed.
It's been made abundantly clear that our global society, whether we like it or not, banks everything on keeping its finger on the pulse. Instant actions demand instant reactions. That's why machine learning matters.
More than anything, AI has lowered the barrier to entry across many different activities and industries, and that lowered barrier has opened the floodgates for slop. Slop has always been around, but this time it's wearing a new outfit.
Despite its importance, AI is not another printing press or cotton gin or steamboat. Its true hype comes from propaganda machines pouring billions of dollars into yet another fabricated blitzscaling strategy. But here we are, letting ourselves get lied to by Big Tech yet again.
Here's how LLMs have been helping me:
I haven't used AI much for other digital mediums like audio, video, or visual art, so I can't say much about what those tools are like to use.
That said, I do think those tools have the potential to improve the process for creators who work in those mediums.
It sucks, but so far that technology has mostly been used for generating 'uncanny valley' slop. It's made watching things in general a lot worse, so I just watch less stuff now. Phone bad.
Cognitive offloading sucks. I don't ever want to feel like a computer is doing my thinking for me. When I use LLMs, I can feel the addictive aspects of the software deeply.
One thing I've noticed with LLMs is that they always tend to nudge users into prompting more, essentially over-assisting. I'll always see them say things like "do you want me to do x?" Most of the time I don't, but I can't really make them stop. Such is the life of using proprietary software.
Soon enough, LLM development will reach a point where self-hosting is practical and costs less than commercial subscriptions. It's already rather close, and my plan is to cancel my ChatGPT subscription once I can run a model about half as sophisticated as GPT-4o on my own hardware.
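For what it's worth, here's a minimal sketch of what that self-hosting setup could look like, assuming the llama-cpp-python bindings and some quantized GGUF model; the file path, prompts, and tuning numbers are placeholders, not a specific recommendation. The nice part is that the system prompt is mine to set, so I could finally tell the model to quit asking "do you want me to do x?"

```python
# Rough sketch of self-hosting: run a quantized GGUF model locally with the
# llama-cpp-python bindings (pip install llama-cpp-python). The model path,
# prompts, and tuning numbers below are placeholders, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/local-model.gguf",  # hypothetical local model file
    n_ctx=4096,   # context window size
    n_threads=8,  # adjust for your CPU
)

response = llm.create_chat_completion(
    messages=[
        # Self-hosting means the system prompt is mine, so the model can be
        # told to answer plainly instead of fishing for the next prompt.
        {"role": "system", "content": "Answer directly. Do not ask follow-up questions."},
        {"role": "user", "content": "Summarize this paragraph: ..."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```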
AI enshittification is right around the corner, so this time I'm coming prepared.