Still, the question in the poll nagged at me. It was framed in a way that invited you to choose one over the other, but it was far from a complete question. Trust, as in what? Trust them to grasp the concept? To be talented enough to build one? Trust them not to create a monster that will go back in time and kill Sarah Connor? What is the trust we are establishing here? If we are talking about general trust to program an AI without ANY bias, then I would honestly say neither is deserving of it.
We have already talked (in very simplistic terms) about how easy it is to inject bias into an AI, so there is no reason to believe that Elon would be better equipped to deal with this than Gates. I do think that Elon might proceed with more caution and work to prevent the injection of bias, whereas Gates seems to have his own agenda with many things (not that I claim to know what that agenda is). Musk’s championing of free speech without the “protection from harm” blanket would seem to add weight to his CV when it comes to building an AI model.
So, we have one and a half points for Elon and none for Bill at this stage, and still I can clearly not choose the wine in front of me. You see, Elon has also come out in opposition to how the political left has operated recently. Would this opposition cause him to create a politically right-leaning AI? It is certainly possible; others who have tried to create censorship-free social media platforms have often ended up with something very right-leaning rather than a politically neutral environment. That is not necessarily the fault of the people who created those platforms, though. It is often the fault of the incumbents. They have given the appearance of political bias favoring the left (the Twitter Files seemed to confirm this), which drove even those on the moderate side of the right to flee to other platforms. The left-aligned users stay on the incumbent platforms because they feel no pressure and no sense of being censored. This environment has created a social media divide instead of simple competition.
The somewhat rambling paragraph above does have a point. Since we know that new social media platforms tend to end up right-leaning because of who joins them, would we also see any new “free speech” aligned AI used mostly by those who are no longer comfortable with the ones they consider left-leaning? Would what those AIs learn then become more and more aligned with the political right? Can we trust anyone to ensure that any bias added, left or right, is unlearned by the AI model so that it is truly neutral? I am not sure about that, and once again, I can clearly not choose the wine in front of you.
Trust is a very dangerous thing. It is not something that should just be given; it needs to be earned. I do not care if it is “science”, the government, Bill Gates, Elon Musk, or anything else really. We all need to build up a tolerance for the “internet” as a whole so we can choose either wine glass and be safe. (See, I was going somewhere with the "Princess Bride" references.) The reality of social media is that it has become a bad place when it could have, and should have, been a great tool to connect people and share knowledge (not just complain about movies and share pornography). The parallel between social media and AI is the same: both are technologies that will be shaped by their users and by political bias, intentional or unintentional. The parallel deepens as we see calls for government regulation of AI, with the leading players opening their systems to governmental review to help shape that regulation and legislation in many countries. Trusting any one person or group of people (including the government) to make the right choices is not an easy thing for me, and it should not be for anyone.