Geoffrey Hinton said that the AlphaGo team's breakthrough was getting it to play against itself and improve that way, since it could then go beyond the human training data it had learned from. He said an equivalent form of self-training for general knowledge would let a superintelligence take off (this is from my memory, not an exact quote).
The TechCrunch article doesn't specify what kind of data a recursive general AI could use to achieve such a thing. If it's possible, that's exciting. It seems like a real philosophical question to answer: how could a general AI self-train?
Physics of the machine running a bunch of different programs
Power use, syscalls, opcodes, etc. to draw a video game or run a text editor; decompose all the specifics of a particular app into generic compute patterns.
Google and Meta are leaning into decomposing workloads into kernel compute schedules: https://sched-ext.com/docs/OVERVIEW
I am working on this approach anyway: I git-submoduled a bunch of open source projects like Quake 3, and I'm telling the bot to build it, run it, reverse it, generalize all the observed system states into normalized compute schedules, and watch its resource use so it doesn't crash the machine. Do; loop.
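The build/run/observe loop described above can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual tooling: the commands, the memory threshold, and the "observed system state" fields are all assumptions, and it only captures coarse resource usage via the POSIX `resource` module.

```python
# Minimal sketch of a build/run/observe loop: run a child process,
# record coarse resource usage, and bail out before exhausting memory.
# All commands and thresholds below are illustrative assumptions.
import resource
import subprocess

def observe(cmd, timeout=60):
    """Run a command and record coarse resource usage of children --
    a crude stand-in for the 'observed system states' in the comment."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    proc = subprocess.run(cmd, capture_output=True, timeout=timeout)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return {
        "returncode": proc.returncode,
        "user_time_s": after.ru_utime - before.ru_utime,
        "sys_time_s": after.ru_stime - before.ru_stime,
        # Note: ru_maxrss is kilobytes on Linux, bytes on macOS.
        "max_rss": after.ru_maxrss,
    }

if __name__ == "__main__":
    # Stand-in workload; a real loop would run e.g. `make` then the game binary.
    stats = observe(["python3", "-c", "sum(range(10**6))"])
    if stats["max_rss"] > 8_000_000_000:  # crude "don't crash the machine" guard
        raise SystemExit("resource budget exceeded")
    print(stats["returncode"], stats["user_time_s"] >= 0)
```

A real version would loop over build/run/trace steps and feed the recorded states into whatever "normalized compute schedule" representation the model is learning.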
Anything we want to do with computers was written and committed to code long ago. We've been diddling our egos hyping brand logos, not making net-new discoveries. It's just electromagnetic geometry: the sync of grid-like memory and a grid-like display.
You'd probably have to embody it.
I'm surely not the first to notice this, but it feels like there's an insane amount of pressure pushing capital towards anything with a hint of AI legitimacy. It's as if asset owners across the planet have come to a consensus that the only industry that will matter going forward is this one (fair enough, I guess), but this intense systemic pressure squeezes insane amounts of money toward literally any AI-shaped outlet that opens up. It's starting to feel more like "scared and desperate" money than "smart money".
> the only industry that will matter going forward is this one (fair enough I guess)
Housing, healthcare, and food production all spring to mind as industries that matter waaaay more than AI! (≧ᗜ≦)
My bet is on ultra greedy trying to find the cure for death. They need best models for that.
Not if all human labor becomes surplus to requirements.
Is it not a case of many funders don't want to risk missing out on the next big thing? And a loss of a few billion now is better than the loss of many billions down the line and control of the future?
Of course the motivation makes sense on the surface. What I'm getting at is that the supply of capital versus the supply of potential "control of the future" plays feels incredibly imbalanced. Money seems so desperate to move into AI that it's lost all prudence (the particular people and company mentioned in the OP notwithstanding; maybe they do deserve $1B).
"Not wanting to risk missing out" is essentially just FOMO, right? "Smart" money feels more like FOMO money these days. We literally have shoe companies saying they're going to pivot to AI and seeing their market cap multiply as a reward.
I don't think Silicon Valley has been smart money for a decade plus. Quantum computing is becoming exactly the same with academic and government funding, with a lot of cash being spent on long shots or no-hopers.
scam
AlphaZero worked because chess and Go have terminal rewards and positions you can prove are right or wrong. General intelligence has neither, and the leap from self-play in a well-defined game to self-play in arbitrary environments is the hard part Silver isn't really demoing. Sara Hooker's work on scaling laws lines up here. (1)
(1) https://philippdubach.com/posts/the-most-expensive-assumptio...
"pre-money valuation" I don't know what that means but it makes me roll my eyes so hard it hurts
Post-Money = Pre-Money + Investment
So pre-money in this case is their valuation even before they've received any investment.
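To make the arithmetic concrete, here is a tiny worked example. The dollar figures are illustrative, not the actual deal terms from the article:

```python
# Post-money = pre-money + new investment; the new investor's stake is
# investment / post-money. Numbers are illustrative only.
pre_money = 10_000_000_000   # $10B valuation agreed before the round
investment = 1_000_000_000   # $1B raised in the round
post_money = pre_money + investment

investor_stake = investment / post_money
print(post_money)                 # → 11000000000
print(round(investor_stake, 4))   # → 0.0909, i.e. about a 9.1% stake
```

So quoting a "pre-money valuation" is just stating what the company is deemed worth before the new money lands, which in turn fixes how much of the company the round buys.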