>I’m not sure why any self respecting researcher works at a closed lab, this isn’t how impactful science ever happens
True, but with some sciences it's often better to keep a low profile and try not to have any impact :|
More progress can often be made without distraction, and future deployment can be better guided by a more mature outlook.
Well, you could say that any org is nothing without its people.
Hotz, though, offers an uncommon perspective: he is well known for a proven ability to accomplish remarkable things single-handedly, someone who doesn't actually need an org of his own to be OK. That close to the bare metal, though, the leverage of leading an org can be hard to match. But that's beside the point.
Now, he doesn't really point to the OpenAI org itself when he brings up his idea of the people who can, to a large extent, make it or break it.
It looks like he means the much larger number of people outside OpenAI who would embrace it more cheerfully if things were actually more open and there were some clear path to spreading the wealth, wherever it comes from, while addressing the shortcomings as originally intended, before they can get out of hand. If plans can truly be made that increase prosperity at the rate investments are taking place, then there should be a highly predictable need for fewer charitable-type efforts such as UBI. And that should be visible in some way as an indicator of worthwhile progress.
It's only logical: the only reason people need a handout who didn't need one before is that they became poorer, not more financially secure.
In an ideal world it's quite possible that perfect AGI could help make it more ideal.
OTOH, in a less-than-ideal world, a far-from-perfect AGI could strongly leverage things toward becoming even less ideal.
Especially if people settle for (or enthusiastically implement) an AI that is far from perfect and deploy it in situations where only perfect AGI would do.
Whether intentionally or not, some people in decision-making positions may not even be able to tell the difference :\
Starting out with a non-ideal world is inherently table stakes, so you'll always have that. In this equation the world at large is actually not much of a variable at all; in any condition it's a huge constant of extreme proportion.
In so many ways the real variable in question becomes "is it perfect AGI, or is it not?"
And that doesn't even address the emotional aspect: who's in charge, and what are they going to do with it, regardless of how good it gets before taking over.
If previous examples are any indication, it's probably good to be careful about what you allow to eat the world. If it doesn't get chewed up and spit out right away, it may just emerge as a whole lot of shit in the end :(
Hotz is no dummy and neither is sama.