Wednesday, April 1, 2026

What I know about AI:

As a serial-attempted-AI-early-adopter, I've been repeatedly burned, but that's not much of a guide for AI, where circumstances are changing fast enough to make previous experience irrelevant. Of course, after five years of hype, all AI is necessarily over-sold, so while it's almost necessary to be skeptical, that same skepticism can blind us to the very-new-and-novel things AI can do this month that it couldn't last month.

Don't trust it--it will confidently lie to you about anything. It has no understanding, and no continuity--it will tell you one thing one day, and something else the next. I would never trust AI for technically factual information. It has a deep well of "encyclopedia" knowledge (thanks, Wikipedia), but on anything that requires specialized expertise, its knowledge base is that of a Reddit commenter.

Don't trust AI to tell you facts. It does not care about truth or falsehood, only plausibility. It only knows what it was trained on, and that knowledge only dates up to a certain point. Those little AI summaries Google and Bing pump out are just whatever the top ten websites say, blended together--which (thanks to SEO) is already AI slop itself. My experience working with AI summaries is that they weren't so much wrong as deeply basic--first-year-undergrad basic. (AI is trained on the web, and web content is produced by average humans, after all.)

The metaphor I hear for AI is a really good junior analyst--fast, tireless, but completely lacking in judgment, so you have to review everything it does; you can't trust it. It's impressively good at summary... but then you never develop the latent memory that serves as guard-rails against doing absurd things.

Judge AI by what it does best--writing code. That's where it shines, because it's trained on a big body of code, and it's a tightly bounded activity. It's gotten vastly better in the past year. A year ago, it would give me code that took longer to debug than to write from scratch. For R code, my outputs improved radically when I started listing the packages/libraries I wanted Claude.AI to use--that solved the maddening case where it would mix up functions with the same name. The second useful bit has been copy-pasting the error message in--people have been doing that for years on Stack Exchange and other forums, so the training set is robust, and it helps rapidly identify the correct problem and solution.
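A concrete instance of that same-name mix-up in R (a minimal sketch, not anything Claude produced): base R's stats package and dplyr both export a function called filter(), and they do entirely different things, which is why naming the intended package in the prompt--or writing the namespace explicitly--helps.

```r
# Name collision: stats::filter() and dplyr::filter() are unrelated
# functions, and loading dplyr masks the stats version.
library(dplyr)

df <- data.frame(x = 1:10)

# dplyr's filter(): subsets rows of a data frame
small <- dplyr::filter(df, x > 7)

# stats' filter(): applies a linear filter (here, a 3-point moving average)
smoothed <- stats::filter(df$x, rep(1/3, 3))

# Spelling out pkg::fun removes the ambiguity, for human readers
# and for code-generating AIs alike.
```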

My friend is a programmer, and when I asked if he was scared of AI, he laughed--he said most of his work is fixing bugs and maintaining existing software, and AI is terrible at that, because every case is unique and novel. The programmers who are scared are the folks who write new code--who make apps. For those folks, vibe-coding is in real danger of putting them out of a job. But the largely unforeseen aspect will be that "software will eat everything," as it becomes feasible to automate anything vaguely algorithmic where the cost of doing so was previously prohibitive.

AI use in education is a cruel joke. People pay money to go to college to learn, and learning consists of developing your own capabilities by doing arbitrarily hard things--which AI entirely short-circuits. Eventually we are going to wind up going back to oral exams, which both actually test what people have learned and develop relevant life-skills. That has implications for our own use of AI: it's doing the work for us, but we are denying ourselves the learning.

But AI is good at doing things fast, and I think we've reached the point where AI plus iterative fixes will, for the same number of hours, generate a superior product. The idea that we will be able to do things in less time is delusional--clients have a budget, and if we offer a product for less, they will be suspicious that we've cut corners. Hence, what we'll see is a rise in standards. I recall the quality of pre-computer studies (done on typewriters!) versus those done on word processors: as hard things got easier to do, quality expectations rose. And I think we'll see the same thing with AI--we'll be expected to review thousands of pages of documents in a way that was never reasonable before.

I think the best niche for AI may be proposals, where its ability to Google things and make up something that sounds plausible, matches the format, and is responsive to the RFP is exactly what's needed. However, I'm very curious about how AI can be used to pull things in from 'my' knowledge base (market quals, resumes, past project descriptions) and chunk that out as a proposal. Indeed, I'm hoping to learn today how to use AI on the CAMPO proposal text. And it would be amazing to have a 'knowledge base' of files I can just feed into the AI. My brother, who is deeply into AI, strongly suggests using multiple AIs--feeding the outputs of one in as inputs to another, iterating back and forth.

"AI has given me specific things that I can then go and validate... it tells me what it sees and then I interrogate it further". "You have to know why you are asking the question"....When we bring in our professional expertise, to collaborate with AI,... can do more than if I just cut and paste..... iterative and Socratic process". - CD

Sounds like knowing how to 'interrogate' an AI--testing and checking what it's pumped out, using professional judgment--is now the 2025 equivalent of knowing how to effectively Google things.


