A few months ago, I wrote a post addressing the use of artificial intelligence in the boardroom.  While the focus of the post was using AI (or not) to draft minutes, I also asked (with apologies for quoting myself), “whether a robot equipped with AI can serve as a director.”  At the time, I thought the question was highly speculative if not absurd, so you can imagine my surprise when I came across this article, in which a CEO is quoted as saying that she’s open to the possibility of having a bot on her board.

As the author of the article writes, “[o]f course, the idea of having an AI board member opens up a host of thorny ethical issues. What would happen if the AI recommended a strategy that goes south? Or relies on biased data to make a decision?”  I agree, but it also would raise fundamental questions about corporate governance.  For example:

  • One of the cornerstones of governance is that directors can be personally liable for certain misconduct.  That rarely happens, but experience suggests that the risk of having to shell out big bucks for not minding the store is something directors think about.  Could a bot be subjected to personal liability?  How?
  • Would a bot comprehend – fully or at all – what “fiduciary duty” means, or be able to evaluate the complexities involved in determining the best interests of the corporation?  I’m sure that someday – maybe even very soon – the answer to both questions may be “yes,” but are we there yet?
  • Could a bot understand the distinction between the board’s ultimate role – oversight – and management (or micromanagement)?
  • How would having a bot on the board impact that prized commodity, collegiality?  Could a bot take “no” for an answer?  What would a board discussion be like if the bot insisted that it was right despite contrary views from other directors?

And so on.

More basic technological concerns also come to mind.  We have all heard – and many of us have experienced firsthand – that AI makes mistakes, sometimes big ones.  I also assume that bots can be hacked; aside from enabling outsiders to listen in on board matters or gain access to confidential or even privileged information, could a bot be reprogrammed to make decisions that harm the company to the benefit of an outside party?

The article appropriately (if humorously) notes that “given that the average director of an S&P 500 company made $336,352 in total compensation last year…, adding a bot to a board may be a better deal than you think.”  I’m not sure what the going price of a bot and the cost of maintaining it may be, and you can call me a Luddite, but at least for now I’ll take human directors.