Most predictions about job displacement caused by large language models (LLMs) are wrong, because they rest on no good explanation of how LLMs and human intelligence differ from each other.
"It is hard for most people to fathom what harm regulations create. This holds especially for general purpose technologies like nuclear or LLMs. For example, without the destructive nuclear energy regulations, we could enjoy energy too cheap to meter, "
And without the child employment regulations, we could cheaply employ children to clean chimneys.
If you want to prove that all regulation is bad, you really need more than one example.
Thanks, I'll expand on my argument here. Child labor regulations are a good example, actually. They are similarly destructive because they are far too broad: they made sense when most labor was physical, but they no longer fit cognitive work. Children working as SWEs, or as assistants to lawyers and doctors, would be far better off than they are under our scholastic, long, and ineffective education system.
Some more discussion here: https://www.reddit.com/r/slatestarcodex/comments/13w0bo8/why_job_displacement_predictions_are_wrong/?sort=new
"It is hard for most people to fathom what harm regulations create. This holds especially for general purpose technologies like nuclear or LLMs. For example, without the destructive nuclear energy regulations, we could enjoy energy too cheap to meter, "
And without the child employment regulations, we could cheaply employ children to clean chimneys.
If you want to prove that all regulation is bad, you really need more than one example.
Thanks I'll expand on my argument here.
Child labor regulations are a good example, actually. They are similarly destructive because they are way too broad. They made sense during a time when most labor was physical.
But children working as SWE's, lawyers, or doctors (assistance) would be way better than our scholarly, long, and ineffective education system.