Wednesday, April 15, 2026

Why Public Meetings are Terrible - A Kludge Stack of Anti-Design

Using scheduled in-person public meetings as your default method of public engagement is terrible--practically designed to generate bad outcomes. An open meeting with a public comment period is purely a forum for naysayers to congregate and berate the city council, thereby creating an echo chamber where the naysayers get their views confirmed, generating a bitter group of people who (absurdly) claim the council ignored 'public opinion' and that democratic processes are being subverted. Which is absolute catnip for a specific variety of concern-trolling-click-bait-publishing 'local' journalist.

We mandate public meetings because we wanted to do away with decisions being made in smoke-filled rooms on a nebulous basis. We then extended the mandate to include public comment, because meetings otherwise just become announcements of the policy made in those smoke-filled rooms. But it's trivial to ignore comments made in a public meeting, so we mandated that they be collected and made a matter of public record. And so we arrived at the Decide-Announce-Defend paradigm anyone who works in infrastructure is familiar with.

We should recognize that this format is a sub-par outcome, a Rube Goldberg kludge of fixes-on-fixes, and not something any sane person would design to actually engage the public. 

Being a public official is tough--you are constantly asked to make important decisions on a multiplicity of topics on which you lack technical expertise, and so you are really reliant on: A) what your staff says, and B) what your personal network says. So it is easy to get a skewed view of what the public supports or will at least accept, and of how widespread opposition actually is. And public meetings only make this worse, because of availability bias--the only people you'll see are the NIMBY chorus, so opposition appears firm and monolithic.

Which is one of the reasons activism is so effective--just having one person show up at a meeting with a contrary point of view destroys the illusion created by availability bias. This dynamic works both ways--it's also the same technique climate denialists have exploited to artificially induce doubt about an overwhelming scientific consensus.

This also has implications for how to effectively present to a public audience. Rhetorically, it's called "planting a naysayer". While you are presenting, you say: "My opponent will say..." and then refute it. It's an incredibly effective technique, because it ties your refutation to any subsequent mention of that issue. Example:

"Folks here will say that building more houses doesn't improve affordability, but the empirical evidence is clear: places that built more apartments have more affordable rent. The bigger the city, the more apartments you have to build to move the needle. Of course the effects of a scant handful of new homes are imperceptible."

So when it comes to effective activism, have someone rhetorically competent go to any public meeting where public comment is permitted, stand up, and give their spiel. If you want to spend money changing the world, that's an efficient way to do it.



Monday, April 13, 2026

South-of-Emigration-Creek-City?

Looking at the Salt Lake City Zoning Map [1], I idly wonder if Salt Lake wouldn't be better off if it just transferred (de-annexed) everything south of Emigration Creek and east of 1700 E. It's the part of the city characterized by houses zoned for 7,000 SF and even 12,000 SF lots. It certainly complicates city politics.

I suspect it's a non-starter for financial reasons--what city would want to lose the taxable revenue from the affluent suburbs it has annexed? But I also suspect that were Urban3 to do an analysis of SLC, it would show that most of the actual money comes from the urban core. (Lots of expensive property doesn't always generate a lot of revenue, due to things like homestead exemptions.) Which represents a substantial shift--for a long time, land values (and land uses) have been so low that controlling affluent areas seemed like a win. But as suburbia ages and the infrastructure renewal costs roll in, I suspect that may be less the case. I doubt 'South-of-Emigration-City' would be financially sustainable on its own--not enough commercial development, not enough density. Perhaps it could join up with the City of Millcreek?

I suppose if you were an arch-capitalist, you'd cut Salt Lake City down to the revenue-generating parts, de-annex the rest, and create something like the City of London. Not very politically viable, though--the Utah Legislature would just bully it like they did with the Inland Port Overlay District, but worse. So perhaps that explains why two very different polities exist in one city--mutual defense.

[1] I often genuinely forget that Salt Lake City extends south of I-80. It's a part of the city I never think about and almost never visit. Likewise, the part of South Salt Lake that extends north of I-80 always seems faintly absurd. Partially it's because SLC was so rigorous in annexing everything north of the 201 to the west. 

"The categories and funding levels represent the goals and visions of the region"

Reading a prior LRTP, I feel strangely humbled. I don't think I've ever seen a clearer connection made between funding and goals. Too often, that connection is nebulous, linked only by a project prioritization schema.

Too often, the financially constrained work program is just a laundry list of widening projects where the model forecast congestion, plus intersection improvements where widening seems too expensive, a dash of safety projects where public outcry has been loud enough, and whatever Bike/Ped projects dedicated activists can drive through.

By deciding in advance what mattered and by how much, the plan financially constrained not just the program as a whole, but specific types of projects within it. Only then was the ranking of specific projects considered.

#--------------------------------------------------------------------------------------------------------------------

Before ranking and selecting the submitted projects for the LRTP, the TAC set a guide for the funding level of the different types of transportation projects. Eleven different categories were reviewed for funding. The categories were: 

  1. Safety Intersection
  2. Geometric Intersections
  3. Corridor Safety Improvement
  4. Road Widening < 5 Miles
  5. Road Widening > 5 Miles
  6. Bike/Ped Facilities
  7. Transit
  8. Resurfacing Primary
  9. Resurfacing Secondary
  10. Bridge
  11. New Roadway

The TAC approved distribution of the COG's LRTP funding of approximately $125M between seven categories. The categories and funding levels represent the goals and visions of the region. Below are the total categories, the percent of the total funding, and the approximate amount of funding available over the life of the LRTP (25 years).



Planning Mantra

 "The future is already here — it's just not very evenly distributed" - William Gibson

As a planner, you don't need to invent anything. City-making is an old technology, and while incremental improvements are constantly ongoing, it's pretty stable. So if you've got a problem, there is a near certainty that someone else has had that same problem (potentially decades or even centuries ago) and has already solved it. 

John Forester is dead

John Forester--the father of vehicular cycling--died in 2020.

Right now, his legacy is 'controversial'. I suspect in a generation his perspective will be more analogous to the miasma theory of disease. Research is pretty clear that many fields fail to move forward while their founding 'big man' continues to live, publish, and advocate. 

I understand where Forester came from--I was a 3% 'confident and assured' cyclist playing chicken with cars basically up until the moment I started cycling with my wife. But 'vehicular cycling' functionally limited cycling participation to a tiny portion of the population, which limited its political support, which limited infrastructure development, which led to our present mess.

"You aren't a real cyclist unless you can ride in traffic" is bunk and always has been. "You aren't a real X unless you can do Y" is exclusionary gatekeeping. Vehicular cycling is a motorism--the idea that other modes can reach parity with cars by pretending to be cars.

Maybe vehicular cycling made sense when Forester was a teen, when the default car was the Morris Minor and there was a nationwide 30 mph speed limit. But in America, the most popular car is the Ford F-150, with much higher mass, much better acceleration, and much higher travel speeds.






Wednesday, April 8, 2026

Use Surveys to Give Courage to Public Officials

One of the real virtues of a public survey is to provide courage for public officials as they make politically charged decisions.

Being a public official is tough--you are constantly asked to make important decisions on a multiplicity of topics on which you lack technical expertise, and so you are really reliant on: A) what your staff says, and B) what your personal network says. So, it is easy to get a skewed view of what the public supports or will at least accept, and of how widespread opposition actually is.


Something to remember next time you design a rider survey--be sure to include a question that addresses politically charged trade-offs like coverage vs. frequency.


Most people also firmly hold a wide variety of beliefs based on anecdotal evidence. Empirical data, when presented with a compelling narrative and selected representative anecdotes, can do a lot to change minds.




Monday, April 6, 2026

Nothing Bore Fruit

Cincinnati's abandoned subway is a cautionary tale about the dangers of planning rapid transit based on corridor availability rather than demand. Admittedly, the Great Depression was an absolute reaper for in-process transit capital projects. But the fact that it was 'originally envisioned as a loop' is an immediate red flag on the 1925 planning. And... some quality time with Google Maps suggests that the 1925 loop was itself planned on the basis of corridor availability--cobbling together a series of available existing railroad corridors (B&O, PRR) along with the canal path, with some handwaving on the connection through downtown. So even if the subway segment was a good idea, the plan in which it was embedded was suspect.

But that's not an issue specific to Cincinnati--planning from that era in general is suspect--nothing bore fruit--there were zero new rapid transit systems between 1908 and 1972 (although a few systems, like Cleveland and PATCO, converted railroads or streetcars to rapid transit during that time).


With regard to the project's continued abandonment: an amazing corridor exists, but thanks to Urban Renewal, the conditions that made it a good idea in 1925 no longer exist.

Which is a problem. People suggest things like "Oh, we have an abandoned rail line, you should add light rail" or "We had a streetcar here in 1925, so you should put one here in 2025" as if land use weren't fundamental to transit feasibility.

Relation between building typologies & density

The world needs a nice diagram illustrating the accessibility trade-off between the space devoted to providing access (highways + parking) and the land being provided access. In the meantime, this graphic does a pretty good job:

Original image source seems to be:



Wednesday, April 1, 2026

What I know about AI:

As a serial-attempted-AI-early-adopter, I've been repeatedly burned, but that experience isn't much help with AI, where circumstances are changing fast enough to make previous experience irrelevant. Of course, after five years of hype, all AI is necessarily over-sold, so while it's almost necessary to be skeptical, that same skepticism can blind us to the very-new-and-novel things AI can do this month that it couldn't last month.

Don't trust it--it will confidently lie to you about anything. It has no understanding, and no continuity--it will tell you one thing one day, and something else the next. I would never trust AI for any technically factual information. It has a deep well of "Encyclopedia" knowledge (thanks, Wikipedia), but for anything that requires specialized expertise, its knowledge base is that of a Reddit commenter.

Don't trust AI to tell you facts. It does not care about truth or falsehood, only plausibility. It only knows what it was trained on, and that knowledge only dates up to a certain point. Those little AI summaries Google and Bing pump out are just whatever the top ten websites say, blended together--which (thanks to SEO) is already AI slop itself. My experience with AI summaries is that they weren't so much wrong as deeply basic--first-year-undergrad basic. (AI is trained on the web, and web content is produced by average humans, after all.)

The metaphor I hear for AI is a really good junior analyst--fast, tireless, but completely lacking in judgment, so you have to review everything it does--you can't trust it. It's impressively good at summary... but then you never develop the latent memory that serves as a guard-rail against doing absurd things.

Judge AI by what it does best: writing code. That's where it shines, because there's a big corpus of code to train on, and it's a tightly bounded activity. It's gotten vastly better in the past year. A year ago, it would give me code that took longer to debug than writing from scratch would have. For R code, my outputs improved radically when I started listing the packages/libraries I wanted Claude.AI to use--that solved the maddening case where it would mix up functions with the same name. The second useful trick has been copy-pasting the error message in--people have been doing that for years on stack.exchange.com and other forums, so the training set is robust, and it helps rapidly identify the correct problem and solution.

My friend is a programmer, and when I asked if he was scared of AI, he laughed--he said most of his work is fixing bugs and maintaining existing software, and AI is terrible at that, because it's all unique and novel. The programmers who are scared are the folks who write new code--who make apps. For those folks, vibe-coding is in real danger of putting them out of a job. But the largely unforeseen aspect will be that "software will eat everything" as it becomes feasible to automate anything vaguely algorithmic where the cost of doing so was previously prohibitive.

AI use in education is a cruel joke: people pay money to go to college to learn, and learning consists of developing your own capabilities by doing arbitrarily hard things, which AI entirely short-circuits. Eventually we are going to wind up going back to oral exams, which both actually test what people have learned and develop relevant life-skills. Which has implications for our own use of AI--it's doing the work for us, but we are denying ourselves the learning.

But AI is good at doing things fast, and I think we've reached the point where AI plus iterative fixes will, for the same number of hours, generate a superior product. The idea that we will be able to do things in less time is delusional--clients have a budget, and if we offer a product for less, they will be suspicious that we've cut corners. Hence, what we'll see is a rise in standards. I recall the quality of pre-computer studies (done on typewriters!) versus those done on word processors. As hard things got easier to do, quality expectations rose. I think we'll see the same thing with AI--we'll be expected to review thousands of pages of documents in a way that was never reasonable before.

I think the best niche for AI may be proposals, where its ability to Google things and make up something that sounds plausible, matches the format, and is responsive to the RFP is a genuine asset. I'm very curious about how AI can be used to pull things in from 'my' knowledge base (market quals, resumes, past project descriptions) and chunk that out as a proposal. Indeed, I'm hoping to learn today how to use AI on the CAMPO proposal text. And it would be amazing to have a 'knowledge base' of files I can just check into AI. My brother, who is deeply into AI, strongly suggests using multiple AIs--feeding the outputs of one in as inputs to the other, iterating back and forth.

"AI has given me specific things that I can then go and validate... it tells me what it sees and then I interrogate it further." "You have to know why you are asking the question." ... "When we bring in our professional expertise to collaborate with AI... [we] can do more than if I just cut and paste... iterative and Socratic process." - CD

Sounds like knowing how to 'interrogate' an AI--to test and check what it's pumped out, using professional judgment--is the 2025 equivalent of knowing how to effectively Google things.