Thoughts on Thinking (dcurt.is)

Among the many ways that AI causes me existential angst, you've reminded me of another one. That is, the fact that AI pushes you towards the most average thoughts. It makes sense, given the technology. This scares me because creative thought happens at the very edge. When you get stuck on a problem, like you mentioned, you're on the cusp of something novel that will at the very least grow you as a person. The temptation to use AI could rob you of the novelty in favor of what has already been done.
This is only a problem if one is writing/thinking about things which have already been written about without bringing a novel approach.

An AI is _not_ going to be awarded a PhD, since by definition a PhD is earned by extending the boundaries of human knowledge:

https://matt.might.net/articles/phd-school-in-pictures/

So rather than accept that an LLM has been trained on whatever it is you wish to write, write something which it will need to be trained on.

I've been finding a lot of fulfillment in using AI to assist with things that are (for now) outside the scope of one-shot AI. For example, when working on projects that require physical assembly or hands-on work, AI feels more like a superpower than a crutch, and it enables me to tackle projects that I wouldn't have touched otherwise. In my case, this applied to physical building, electronics, and multimedia projects that rely on simple code outside my domain of expertise.

The core takeaway for me is that if you have the desire to stretch your scope as wide as possible, you can get things done in a fun way with reduced friction, and still feel like your physical being is what made the project happen. Often this means doing something that is either multidisciplinary or outside of the scope of just being behind a computer screen, which isn't everyone's desire and that's okay, too.

Completely agree.

From all of my observations, the impact of LLMs on human thought quality appears largely corrosive.

I’m very glad my kid’s school has hardcore banned them. In some classes they only allow students to turn in work that was done in class, under the direct observation of the teacher. There has also been a significant increase in “on paper” work versus work done on a computer.

Lest you wonder “what does this guy know anyways?”, I’ll share that I grew up in a household where both parents were professors of education.

Understanding the effectiveness of different methods of learning (my dad literally taught Science Methods) was a frequent topic. Active learning (creating things using what you’re learning about) is so much more effective than passive, reception-oriented methods. I think LLMs largely support the latter.

I just wrote a paper a few days ago arguing that "manual thinking" is going to become a rare and valuable skill in the future. When you look around, everyone is finding ways to get better using AI, and they're all finding amazing successes – but we're also unsure about the downsides. My hedge is that my advantage in ten years will be that I chose not to do what everyone else did. I might regret it; we will see.

I've noticed something like this as well. A suggestion is to write/build for no one but yourself. Really no one but yourself.

Some of my best writing came during the time that I didn't try to publicize the content. I didn't even put my name on it. But doing that and staying interested enough to spend the hours to think and write and build takes a strange discipline. Easy for me to say as I don't know that I've had it myself.

Another way to think about it: Does AI turn you into Garry Kasparov (who kept playing chess as AI beat him) or Lee Sedol (who, at least for now, has retired from Go)?

If there's no way through this time, I'll just have to occasionally smooth out the crinkled digital copies of my past thoughts and sigh wistfully. But I don't think it's the end.

I keep going back and forth on this feeling, but lately I find myself thinking "F it, I'm going to do what interests me".

Today I'm working on doing the unthinkable in an AI-world: putting together a video course that teaches developers how to use Phlex components in Rails projects and selling it for a few hundred bucks.

One way of thinking about AI is that it puts so much new information in front of people that they're going to need help from people known to have experience to navigate it all and curate it. Maybe that will become more valuable?

Who knows. That's the worst part at this moment in time—nobody really knows the depths or limits of it all. We'll see breakthroughs in some areas and not in others.

I haven't resonated with an article in a long time, until this one.

Wow, one day after this post: https://blog.ayjay.org/two-quotations-on-the-brief-dream-of-...