AI can hallucinate. Always check for accuracy.

AI models can make errors and perceive patterns that don't exist. This behaviour is inherent to their design.

The datasets used to train models such as ChatGPT, DeepSeek and Copilot are skewed towards general knowledge.

This is why domain experts will inevitably spot mistakes in AI outputs when a question demands highly specific knowledge.

AI can be very convincing when it's bullshitting, spinning plausible-sounding explanations out of a thin thread of information, so catching errors can take a discerning eye.

Well-targeted prompts can elicit better outputs, but their effectiveness will always be limited by insufficient training data on the subject matter.