Discussion about this post

mani malagón

Yes, there are dangers associated with blindly sharing neural net transformation settings, but the imminent threat is the immersive mis•information shaping human neural nets.

—G•O•O•G•L•E - Granting Oxidized Old Genuflections Licensed Exceptions

Google privileges the same medical establishment that has overseen declining population health metrics [chronic iatrogenic neglect]. This is an "expertise" paradox: how can institutions that have presided over 90% of Americans becoming metabolically impaired be considered the gold standard for health information?

This contradiction lies at the heart of why Google's algorithm changes are problematic censorship rather than “legitimate quality control”.

Google's Health Information Control: A Critical Analysis, is.gd/YVAI7K

JerryB

Share weights, get AIDS. AI Derangement Syndrome.

I work with AI geeks. For 30 years I've been waiting for them to describe the feature set that their CNN (convolutional neural net) clues in on. That might allow a simpler, lightweight algorithm that wouldn't require a boatload of training data; as it stands, they even generate fake training data because there isn't enough real data.
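To make that concrete: below is a minimal sketch, assuming PyTorch, of gradient-based saliency, one common way to probe which input features a trained CNN keys on. The TinyCNN model and the random input are hypothetical stand-ins for illustration, not anything from a real project.

```python
# Minimal sketch: gradient-based saliency for a toy CNN (hypothetical example).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A small stand-in CNN; any trained image classifier would do."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN().eval()

# A random stand-in "image"; in practice this would be a real sample.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

# Saliency: gradient of the top class score with respect to the input pixels.
# Pixels with large gradient magnitude are the ones the network is most
# sensitive to, which is a rough first-pass answer to "what does it clue in on?"
score = model(x).max()
score.backward()
saliency = x.grad.abs().squeeze()

print("saliency map shape:", tuple(saliency.shape))
print("most influential pixel (flattened index):", saliency.argmax().item())
```

This kind of attribution map is only a heuristic; it shows sensitivity, not a human-readable "feature set," which is part of why the description the commenter is waiting for remains elusive.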

For the non-geeks: there's real money in building AI that creates fake data. And I'm not just talking about X and MSNBC.
