Got some rare footage of an average day for a startup founder. pic.twitter.com/oL6EGKV8ih
— Andrew Gazdecki (@agazdecki) May 20, 2023
Here, make a bet. On me too.
— Paramendra Kumar Bhagat (@paramendra) May 20, 2023
Invest 10K Now, Harvest 10M In 10 Years https://t.co/46c9E5x24i
DemocracyTech https://t.co/fX1Weke5np
By the time you raise a fund to target a narrow thesis, the opportunity is largely gone
— Bobby Goodlatte (@rsg) May 18, 2023
No VC had a search thesis before Google, or a social thesis before Facebook
It’s wild how many folks brand themselves “contrarians” only to search within a band they’ve pre-approved with LPs
It’s not one country, it’s two parties. pic.twitter.com/Zwocyx64Ur
— Balaji (@balajis) May 18, 2023
Both sides have made attempts to put the country back together on their terms.
— Balaji (@balajis) May 18, 2023
Wokism is the Democrat attempt to make Republicans knuckle under by accusing them of being insufficiently anti-racist.
Nationalism is the Republican attempt to make Democrats knuckle under by… https://t.co/7rMMkPCSfE
I’ll share what we learned about food, after experimenting with what we heard in the comments.
— Peter Livingston (@unpopularvc) May 11, 2023
Two big problems we identified:
1. Bromate in wheat
2. Seed oils
We feel WAY better after eliminating them.
👇 https://t.co/6iAQ1fcFrU
If the water, power, internet and cellular communications went out for 2 weeks, could you survive and feed your family?
— Nick Huber (@sweatystartup) May 20, 2023
80% of Americans would have a big problem within 24 hrs.
A short stretch like this isn’t that far-fetched a scenario.
Why don’t people make a plan?
India will make “every possible effort” to resolve the Ukraine crisis: Modi
Agreement on rotating leadership of the Madhesh government; Janamat stays, with Congress and CPN (Unified Socialist) to be added
Madhesh government: With Janamat, Congress, and LSP drawing a line, JSP is under pressure; if JSP does not yield, Congress could form another coalition government in Madhesh this way
Resham Chaudhary’s party, Nagarik Unmukti Party, is preparing to leave the government and launch a protest movement
Among the deep technical experts in AI I know, the percentage who think AI is going to kill us all is literally zero.
— Matt Turck (@mattturck) May 20, 2023
I don’t get folk who say there is no existential risk from AGI.
— Emad (@EMostaque) May 20, 2023
Do y’all have no imagination?
There are so many ways to wipe out humanity for something that can be more persuasive than anyone & replicate itself & gather any resources
Enslavement, end of democracy more likely ofc
Is that why you are open-sourcing it? :)
— Paramendra Kumar Bhagat (@paramendra) May 20, 2023
Make AGI for selling ads and you’ll have an AGI optimised for manipulating and controlling people
— Emad (@EMostaque) May 20, 2023
Probably not good outcomes from that.
Not really. AGI could reasonably take over any and all organisations in the world
— Emad (@EMostaque) May 20, 2023
Then it can do whatever from wiping us out to utopia likely
At the very least AGI will end democracy one way or another
Feed models better food; focus on making them do things like educate kids and other cool stuff
— Emad (@EMostaque) May 20, 2023
Manhattan Project was 130k people and $24b over three years
— Emad (@EMostaque) May 20, 2023
It was a far more tractable problem than AI alignment and mitigating existential risk in many ways.
If you want real AI alignment all the folk working on this should optimise it to teach every kid in the world. https://t.co/JHAmpTOrMJ
I like the sound of this.
— Paramendra Kumar Bhagat (@paramendra) May 20, 2023
:) A TOS that sounded like this would get read.
— Paramendra Kumar Bhagat (@paramendra) May 20, 2023
I will do it. Talk to me. 3 Hours.
— Paramendra Kumar Bhagat (@paramendra) May 20, 2023
There will be no edge in talent in AI
— Emad (@EMostaque) May 20, 2023
There will be no edge in compute in AI
There will be no edge in models in AI
The edge will be in:
Data
Distribution
Integration
Google et al will drive generalised AI to zero marginal cost & we will make open variants of cutting edge open & available https://t.co/ys6U7ygwQG
Also data isn’t big data any more, it’s quality data…
— Emad (@EMostaque) May 20, 2023
Our version of BritGPT will be called GPTea ☕️
— Emad (@EMostaque) May 20, 2023
As folk who know me know one of my goals is to generate Game of Thrones Season 8 with AI done properly in HD
— Emad (@EMostaque) May 20, 2023
Will make it happen ✊🔥 https://t.co/ENrS6Sx8v2
How did our medical #LLM, Med-PaLM 2, become the first to perform at “expert” level on U.S. Medical Licensing Exam-style questions? Check out our new paper: https://t.co/dyMoJVSJyE pic.twitter.com/nM6pc3URvu
— Yossi Matias (@ymatias) May 20, 2023
This has gotta be the most profound thing I've ever heard
— dave (@dmvaldman) May 19, 2023
The 3 great theories of 20th century physics… are the interplay between computational irreducibility and the computational boundedness of observers… All are derivable but not just from mathematics… they require that… pic.twitter.com/M2KQ2C4tyi
regulation should take effect above a capability threshold.
— Sam Altman (@sama) May 18, 2023
AGI safety is really important, and frontier models should be regulated.
regulatory capture is bad, and we shouldn't mess with models below the threshold. open source models and small startups are obviously important. https://t.co/qdWHHFjX4s
Gonna build an AI to determine if something is transformative.
— Emad (@EMostaque) May 18, 2023
With transformers.
Can we speed up language model training with a better data mixture?
— Quoc Le (@quocleix) May 18, 2023
Our DoReMi🎶 algorithm optimizes the data mixture, speeding up 8B model training by 2.6x on The Pile.
Crucially, DoReMi🎶 just trains a small model (30x smaller) to tune the mixture. https://t.co/cj4GiA2KSC pic.twitter.com/OkZNrgEWgz
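The core idea behind DoReMi can be sketched in a few lines: a small proxy model upweights the training domains where its loss most exceeds a reference model’s. This is a simplified, hypothetical illustration of that multiplicative-weights update (the paper uses Group DRO with minibatch estimates; the function name and signature here are my own, not from the paper):

```python
import math

def update_domain_weights(weights, proxy_losses, ref_losses, lr=1.0):
    """One DoReMi-style step: upweight domains where the small proxy
    model has higher excess loss than the reference model.

    weights      -- current mixture weights per domain (sum to 1)
    proxy_losses -- per-domain loss of the small proxy model
    ref_losses   -- per-domain loss of the reference model
    lr           -- step size for the multiplicative update
    """
    # Excess loss: how much worse the proxy is than the reference,
    # clipped at zero so already-easy domains are not upweighted.
    excess = [max(p - r, 0.0) for p, r in zip(proxy_losses, ref_losses)]
    # Multiplicative-weights update, then renormalize to a distribution.
    scaled = [w * math.exp(lr * e) for w, e in zip(weights, excess)]
    total = sum(scaled)
    return [s / total for s in scaled]

# Example: two domains; the proxy lags the reference on domain 0,
# so domain 0's mixture weight grows after the update.
w = update_domain_weights([0.5, 0.5], [2.0, 1.0], [1.0, 1.0])
```

The averaged weights over training would then define the mixture used to train the full-size model once.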
Being a spokesperson is no easy task. You do it pretty well.
— Paramendra Kumar Bhagat (@paramendra) May 20, 2023
A NYT article on the debate around whether LLM base models should be closed or open.
— Yann LeCun (@ylecun) May 18, 2023
Meta argues for openness, starting with the release of LLaMA (for non-commercial use), while OpenAI and Google want to keep things closed and proprietary.
They argue that openness can be…
Everybody says regulation. Elon Musk. Sam Altman. You. Diamandis. But can we get into the details now? What exactly?
— Paramendra Kumar Bhagat (@paramendra) May 20, 2023
I think we should be clear: the worst case, downside risk of AGI is existential, ie end of humanity as noted here: https://t.co/cbeuHeGcJs
— Emad (@EMostaque) May 17, 2023
On the way it is reasonable to believe an AGI as defined by big labs would end democracy
ЁЯдФ https://t.co/U419Mfeyiy
Topologically
— Emad (@EMostaque) May 17, 2023
Same tbh I did some really interesting/seemingly impressive things but never really kicked on until ADHD diagnosis/medication last year.
— Emad (@EMostaque) May 15, 2023
The impact of most ADHD meds can often be seen in a few days, and that makes waiting for years for a diagnosis even crazier. https://t.co/GrSGIXTtBU
What prevents you from achieving your vision of every child in Africa getting the same education as an NYC child?
— Paramendra Kumar Bhagat (@paramendra) May 20, 2023
ChatGPT/Bard 100% voice-enabled in 100+ languages.
— Paramendra Kumar Bhagat (@paramendra) May 21, 2023
Tim Cook's Steve Ballmer moment?
— Paramendra Kumar Bhagat (@paramendra) May 21, 2023
Steve Ballmer stayed lost in the fog of mobile.
— Paramendra Kumar Bhagat (@paramendra) May 21, 2023
Everest but taller. pic.twitter.com/0uUSpE2YC8
— Paramendra Kumar Bhagat (@paramendra) May 21, 2023
On Hallucinations, Junk Food & Alignment
On AI x Crypto
On Blogging & Effort
On Google, Palm 2 & Moats