This is a much bigger ask. AI must be trained on data, so that data first needs to exist. To train an AI to assist in your project operations, you will need a bank of data from previous projects. And, indeed, any agency that has been running for a while will have a significant amount of data about how its projects are run. The trouble is that it might not be usable data.
As AI is not a conscious actor, it cannot judge information critically. Where many (though not all) people get a gut feeling that a piece of information is fake or embellished, AI cannot make this assessment. Humans can fact-check something and course-correct; AI cannot.
This is where the now-classic programming aphorism comes into play: “garbage in, garbage out”. An AI can’t tell if you are training it on data that is patchy, inconsistent, or riddled with biases and process errors. It will learn from bad data all the same, and the end result will be an AI that replicates those problems.
This is how Amazon accidentally created a hiring AI that discriminated against women, and how the UK government ended up downgrading the exam results of students from lower socio-economic backgrounds. Bad data ended up replicating bias.
These are both extreme cases with terribly unfair consequences. Using AI to augment your project management is unlikely to carry quite the same level of ethical weight, but these cases still illustrate a vital point: train an AI on shoddy data, and you can end up with something that causes more harm than good. In a project management context, this might look like inaccurate predictions and unhelpful estimations. If you then act uncritically on those suggestions, it is not difficult to imagine the sort of trouble you could end up in.
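As a toy illustration of the point, the sketch below (in plain Python, with entirely hypothetical numbers) trains a naive effort estimator on historical project records where hours were systematically underreported. The "model" is just a mean of past records, but it faithfully learns the bias, exactly the garbage-in, garbage-out failure described above.

```python
# Hypothetical records: actual effort was ~100 hours per project,
# but team members only logged ~70% of their hours (garbage in).
logged_hours = [68, 72, 71, 69, 70, 73]

def naive_estimator(history):
    """Predict effort for the next project as the mean of past records."""
    return sum(history) / len(history)

estimate = naive_estimator(logged_hours)
actual_typical_effort = 100  # assumed ground truth for illustration

print(f"Model's estimate: {estimate:.1f} hours")
print(f"Shortfall vs. reality: {actual_typical_effort - estimate:.1f} hours")
# The estimator replicates the underreporting bias (garbage out):
# it confidently predicts ~70 hours for work that really takes ~100.
```

No real estimation model is this simple, but the failure mode is identical at any scale: a statistical model cannot distinguish "what happened" from "what was recorded", so systematic errors in the records become systematic errors in the predictions.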