katie on Nostr
Also - think about security bugs that are reported. It knows about them because they get reported on centralized platforms. Security pen testers find them. Right now, it’s simply a language model… it has no ability to execute pen testing itself. It relies on humans to produce content (release notes, open source code, quality reports, or GitHub threads) to learn from. It’s still very much piggybacking off of human-generated data. And that’s not to say models won’t generate that data in the future… but it will be a series of models with different tasks that communicate with each other.
Published at 2023-04-18 23:49:30
Event JSON
{
  "id": "abe1e09aa4a91c0512da16e77145644ef5a879565d071273dccbf9b61ccdd572",
  "pubkey": "07eced8b63b883cedbd8520bdb3303bf9c2b37c2c7921ca5c59f64e0f79ad2a6",
  "created_at": 1681861770,
  "kind": 1,
  "tags": [
    [
      "e",
      "494bc5c6d8516f1bb712b0dcbe5a43a11724ff0f4ce44ba45fa87bd33ed8192d"
    ],
    [
      "e",
      "272c490911ef6e8a02bc64eb9510103a58ad3f66e5d316119e93622bd7415e42"
    ],
    [
      "p",
      "1bc70a0148b3f316da33fe3c89f23e3e71ac4ff998027ec712b905cd24f6a411"
    ],
    [
      "p",
      "9baed03137d214b3e833059a93eb71cf4e5c6b3225ff7cd1057595f606088434"
    ]
  ],
  "content": "Also - think about security bugs that are reported. It knows about them because they get reported on centralized platforms. Security pen testers find them. Right now, it’s simply a language model… it has no ability to execute pen testing. It relies on humans to produce content (release notes, open source code, quality reports or GitHub threads) to learn. It’s still very much piggy backing off of human generated data. And not to say models won’t generate that data in the future… but it will be a series of models with different tasks that communicate with each other. ",
  "sig": "63ac96b78f1544c64974ef2dfa6ed04587f98964738e32b15518a28243efef8d89659b7cd5aab5a60c3e77157993f62bfb4d070ef5f83b0c5a4625f377d605b2"
}
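For readers unfamiliar with the event format above: this is a kind-1 (text note) event as defined by Nostr's NIP-01. The "e" tags reference the events this note replies to, the "p" tags the pubkeys being notified, and the "id" field is the SHA-256 hash of a canonical serialization of the other fields, which is what "sig" signs. A minimal sketch of that id derivation in Python follows, assuming json.dumps with compact separators and ensure_ascii=False approximates the NIP-01 byte-exact serialization for typical content; it is illustrative, not a full verifier, and does not check the signature.

# Minimal sketch of NIP-01 event id derivation (assumption: json.dumps
# closely matches the serialization rules for ordinary content strings).
import hashlib
import json
import sys

def compute_event_id(event: dict) -> str:
    # NIP-01: id = sha256 of the UTF-8 JSON serialization of
    # [0, pubkey, created_at, kind, tags, content] with no extra whitespace.
    serialized = json.dumps(
        [
            0,
            event["pubkey"],
            event["created_at"],
            event["kind"],
            event["tags"],
            event["content"],
        ],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    # Pipe the event JSON above into this script; it reports whether the
    # recomputed hash matches the event's "id" field.
    event = json.load(sys.stdin)
    print(compute_event_id(event) == event["id"])

If the serialization matches byte for byte, running this against the event above should reproduce its "id" value.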