The really nice thing about this proposal is that at least now we can all stop anthropomorphizing Larry Ellison, and give Oracle the properly robot-identifying CEO it deserves.
This is an interesting and thoughtful article, I think, but it's worth evaluating in the context of the service ("cognitive security") its author is trying to sell.
That's not to undermine the substance of the discussion on political/constitutional risk under the inference-hoarding of authority, but I think it's useful to bear in mind the author's commercial framing (or, more charitably, the motivation for the service, if this philosophical consideration preceded it).
A couple of arguments against the idea of singular control: it would require technical experts to produce and manage, and it would be distributed internationally, since any country advanced enough would have its own version. But it would, of course, still pose tricky questions for elected representatives in democratic countries to answer.
Admittedly, there's no direct tie to what I'm trying to sell. I just thought it was a worthwhile topic of discussion - it doesn't need to be politically divisive, and I might as well post it on my company site.
I don't think there are easy answers to the questions I'm posing, and any engineering solution would fall short. Thanks for reading.
Considering things like Palantir, and the DOGE effort run through Musk, it seems inconceivable that this is not already the case.
I think I'm more curious about the possibility of using a special government LLM to implement direct democracy in a way that was previously impossible: collecting the preferences of 100M citizens, and synthesizing them into policy suggestions in a coherent way. I'm not necessarily optimistic about the idea, but it's a nice dream.
Thanks for the comment. Interesting to think about but I am also skeptical of who will be doing the "collecting" and "synthesizing". Both tasks are potentially loaded with political bias. Perhaps it's better than our current system though.
Sounds like Helios https://www.youtube.com/watch?v=swbGrpfaaaM
Indirectly, this is kind of what I was trying to get at in this weekend project: https://github.com/stewhsource/GovernmentGPT. It uses the British Commons debate history as a starting point to capture divergent views across political affiliation, region, and role. Changes over time would be super interesting, but I never had time to dig into that. TL;DR: it worked surprisingly well, and I know a few students have picked it up to continue this theme in their research projects.
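The repo's actual pipeline presumably runs an LLM over Hansard transcripts, but the "divergent views by affiliation" idea can be illustrated with a minimal sketch: for each group, surface the terms most over-represented in that group's speeches relative to everyone else's. The records and party labels below are invented toy data, not anything from the project:

```python
from collections import Counter, defaultdict

# Toy stand-ins for Hansard-style debate records; the real project
# ingests actual Commons transcripts, and these examples are invented.
speeches = [
    {"party": "A", "text": "we must cut taxes and reduce spending"},
    {"party": "A", "text": "lower taxes will grow the economy"},
    {"party": "B", "text": "we must fund schools and public services"},
    {"party": "B", "text": "public services need more funding"},
]

def distinctive_terms(records, key="party", top_n=3):
    """Rank each group's terms by how over-represented they are
    relative to every other group (a crude divergence signal)."""
    by_group = defaultdict(Counter)
    for r in records:
        by_group[r[key]].update(r["text"].split())
    overall = Counter()
    for counts in by_group.values():
        overall.update(counts)
    ranked = {}
    for group, counts in by_group.items():
        # score = occurrences in this group minus occurrences everywhere
        # else, so shared filler ("we", "must") scores zero and drops away
        score = {w: n - (overall[w] - n) for w, n in counts.items()}
        top = sorted(score.items(), key=lambda kv: -kv[1])[:top_n]
        ranked[group] = [w for w, _ in top]
    return ranked

views = distinctive_terms(speeches)
print(views["A"])  # → ['taxes', 'cut', 'reduce']
```

A real version would swap the raw term counts for TF-IDF or an LLM summarization pass, and key the grouping on region or role instead of party, but the divergence-by-group structure stays the same.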
Think we're already there, aren't we?
No human came up with those tariffs on penguin island.
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION.
Let’s assume we live in a hypothetical sane society, and company owners and/or directors are responsible for their actions through this entity. When they decide to delegate management to an LLM, wouldn’t they be held accountable for whatever decisions it makes?
Management is already never held accountable, so replacing them is a net benefit.
I wonder if that quote is still applicable to systems that are hardwired to learn from decision outcomes and new information.
What (or who) would have been responsible for the Holodomor if it had been caused by an automated system instead of deliberate human action?
While I have great respect for this piece of IBM literature, I will also mention that most humans are not held accountable for management decisions, so I suppose this idea was for a more just world that does not exist.
human CAN and computer CAN NEVER
My point is that accountability is perhaps irrelevant. You can turn off a computer; you can turn off a human. Is that accountability? Accountability only exists if there are consequences.
If accountability is taking ownership for mistakes and correcting for improved future outcomes, certainly, I trust the computer more than the human.
I'd say that the fix, then, is creating a more just world where leaders are held accountable, rather than handing it off to something that, by its very nature, cannot be held accountable.