Fabio Manganiello on Nostr:
I’ve worked in AI in many forms for many years, and I’m quite positive that eventually it will turn out to be a net positive force. It’s what happens in the transient, before we settle at that new stable point, that worries me.
Honestly, it will take AI a few more years of pure R&D to reach a true maturity point. Blockers on the way:

- Scaling issues. We’ve realized that statistical neural-like models are great at solving many problems, but they are efficiency and scalability nightmares. My kid only had to see a couple of cats before being able to tell the difference between a cat and a dishwasher, but it took some of the top models in use today a lot of energy-intensive training on millions of carefully labelled pictures. We can do better. We honestly need a few more years of R&D to figure out how to solve this problem: add symbolic-like reasoning to the current models? Invest more in small, modular models rather than huge general-purpose ones? All ideas are on the table.

- Supervision issues. We don’t know exactly why a model favours one outcome over another, beyond “it’s statistically more likely based on the data it was trained on”. And we don’t know the full implications of using, at such a large scale, models we don’t even fully understand. We need a few more years of pure R&D with everyone in the field (politicians, psychologists, sociologists, philosophers, engineers, etc.) just to get things right.
Unfortunately, neither is likely to happen if everybody has an incentive to squeeze as much milk as possible out of today’s cow for as long as possible.
Product/business managers in companies of all sizes keep repeating “we need to use AI in our products” because everybody up the chain has an incentive to say so. And it’s not even clear which problems exactly they want to solve. If everybody has an incentive to keep the hype going, then funding and attention are unlikely to go to those who are trying to get things right.
Published at 2024-06-06 15:28:43

Event JSON:
{
  "id": "2d181686f08ffa74e6af8fa79950f95d8e35fb2b9e802dbcae0eccbfc04cb22b",
  "pubkey": "8179879e743ecc0b539b67420e7dc29a1f097751a00fa1c74d3cea319465223b",
  "created_at": 1717687723,
  "kind": 1,
  "tags": [
    [
      "p",
      "8179879e743ecc0b539b67420e7dc29a1f097751a00fa1c74d3cea319465223b"
    ],
    [
      "e",
      "019542ad4c403e3ffd145895a75af61cbf0a39861cbbd805120d1127cf610553",
      "",
      "root"
    ],
    [
      "p",
      "09d5d503eaa28cc8b94569f33f377f7a030c71aa995ae387f6fc7b3c974daf6d"
    ],
    [
      "e",
      "7c213e434e556656050dbb73d2da66bdd221464de53b4d03f0d6ffa8e2c0e2cb",
      "",
      "reply"
    ],
    [
      "proxy",
      "https://manganiello.social/objects/4002990a-d3a6-4cb2-9fd5-509f93466606",
      "activitypub"
    ],
    [
      "L",
      "pink.momostr"
    ],
    [
      "l",
      "pink.momostr.activitypub:https://manganiello.social/objects/4002990a-d3a6-4cb2-9fd5-509f93466606",
      "pink.momostr"
    ]
  ],
  "content": "I’ve worked in AI in many forms for many years, and I’m quite positive that eventually it will turn out to be a net positive force. It’s what happens in the transient until that new stability point that worries me.\n\nWhat worries me is that it’ll honestly take AI a few more years of pure R\u0026D to reach a true maturity point. Blockers on the way:\u003cli\u003e\u003cp\u003eScaling issues. We’ve realized that statistical neural-like models are great at solving many problems, but they are an efficiency and scalability nightmares. My kid only had to see a couple of cats before being able to tell the difference between a cat and a dishwasher, but it took some of the top-level models in use today a lot of energy-intensive training on millions of carefully labelled pictures. We can do better. We would honestly need a few more years of R\u0026amp;D to figure out how to solve this problem - add symbolic-like reasoning to the current models? invest more on small modular models rather than general-purpose huge models? All ideas are on the table.\u003c/p\u003e\u003c/li\u003e\u003cli\u003e\u003cp\u003eSupervision issues. We don’t know why exactly a model may favour an outcome over another other than “it’s statistically more likely based on the data it’s been trained on”. And we don’t know the full implications of such a large-scale usage of models that we don’t even fully understand. We need a few more years of pure R\u0026amp;D with everyone in the field - politicians, psychologists, sociologists, philosophers, engineers etc. - just to get things right.\u003c/p\u003e\u003c/li\u003e\n\nUnfortunately, both things are unlikely to happen if everybody has incentives in squeezing as much milk as possible out of today’s cow for as long as possible.\n\nProduct/business managers in companies of all sizes keep repeating “we need to use AI in our products” because everybody up the chain has incentives in saying so. And it’s not even clear what problems exactly they want to solve. If everybody has incentives to keep the hype on, then funding and attention is unlikely to go to those who are trying to get things right.",
  "sig": "9fefe7858efa0abfa62f815929372117ff44178f9815a06c2072bcf6ea919fa5217c9332b6ceff5af21ec16ca2209c0df823248cfefb453cfbb4f18d2bc16132"
}
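
For anyone curious about the raw event above: per Nostr's NIP-01, the "id" field is the SHA-256 hash of a canonical serialization of the event, so any client can recompute it to verify integrity. Below is a minimal Python sketch of that check; the filename event.json is an assumption, and the compact-JSON serialization matches NIP-01's canonical form for typical content, though edge-case character escaping can differ.

import hashlib
import json

# Load the raw event shown above (the filename is a placeholder).
with open("event.json") as f:
    event = json.load(f)

# NIP-01: the id is the SHA-256 of the UTF-8 bytes of the compact
# JSON serialization of [0, pubkey, created_at, kind, tags, content].
serialized = json.dumps(
    [0, event["pubkey"], event["created_at"], event["kind"],
     event["tags"], event["content"]],
    separators=(",", ":"),   # no whitespace, as NIP-01 requires
    ensure_ascii=False,      # keep non-ASCII characters verbatim
)
computed_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()

print("computed id:", computed_id)
print("matches event id:", computed_id == event["id"])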