At yesterday's US House subcommittee hearing on the risks of AI, there was a lot of enthusiasm for privacy-enhancing ML technologies. That seems like a valuable direction, but it alone isn't enough. A company training privacy-enhanced ML systems over your data ... still has your data, and we don't have to accept that.
Apropos:
https://www.scientificamerican.com/article/your-personal-information-is-probably-being-used-to-train-generative-ai-models/