Most predictions about job displacement by large language models (LLMs) are wrong, because they lack a good explanation of how LLM and human intelligence differ.
"It is hard for most people to fathom what harm regulations create. This holds especially for general purpose technologies like nuclear or LLMs. For example, without the destructive nuclear energy regulations, we could enjoy energy too cheap to meter …"
And without the child employment regulations, we could cheaply employ children to clean chimneys.
If you want to prove that all regulation is bad, you really need more than one example.
Some more discussion here: https://www.reddit.com/r/slatestarcodex/comments/13w0bo8/why_job_displacement_predictions_are_wrong/?sort=new