
May 09, 2023


Pretty obviously, "tasks that human beings perform" includes tasks performed by management. Middle management, in particular, tends to involve enormous amounts of paper shuffling, interminable meetings to coordinate work, etc. -- in short, tasks which seem, at first glance, tailor-made for an AI solution.

Senior management involves a bit more judgement. But it also includes big chunks of interfacing with investors, boards of directors, etc. Those are also automatable, save the parts that require physical handholding. And even those diminish in number as investing becomes automated. In short, AI's "benefits" can be readily applied to management, too. Sauce for the goose and all that.

So, how does that benefit workers? Predictability. You may have had personal experience with managers for whom the term "capricious" was tailor-made. Dealing with software instead would eliminate most of that. Yes, the occasional software upgrade could entail new requirements for workers. But still, an overall benefit.

Certainly there are management tasks which require judgement, including consideration of factors beyond the ken of AIs. But then, there are workers' tasks of which the same can be said. And judgement, like inspiration and innovation, is currently beyond AI's capabilities.

Except, wj, that it's management that implements AI, same as it is management that hires McKinsey.

My union could hire McKinsey to look at university management and offer solutions to administrative bloat that would likely free up a lot of overhead and prestige construction that do nothing for educational goals, but my union has no executive power.

I have long been a fan of Benefit Corporations as an alternative to the vampire capitalist model. I've watched many corporations I admired try to create a more ethical and sustainable business model for themselves and their workers. Every one of them has hit a hard limit the moment that they needed finance capital for expansion. Finance capital demanded that they change their charter and abandon the limits they placed on their pursuit of profits. Finance capital demanded that the workers sell their controlling shares in the worker-owned business to a larger corporation before the banks opened up the funds needed to expand.

It's not that they were not profitable, they just weren't profitable enough to satisfy finance capital.

I'm hoping that Patagonia bucks that trend. Time will tell.

I'd love to believe that unions could buy AI that would let them replace management with an ethical machine - a philosopher king, a Solon - but unless the unions could start from scratch to build the enterprise out of raw materials they would need to rely on banks to access capital.

I've also seen enough union politics to wonder if an AI powered union leadership would remain responsive to its membership. I'm a realist there, too.

I gave up on the idea of a technolibertarian utopia a decade ago.

Hope for Chiang's reformist AI, though. A revolution-minded AI is not something I particularly wish to contemplate.

There goes all my Marxist cred.

At first glance, I read the post title as "AI as McKinney." That would probably work pretty well.

Almost all of my major tasks, like creating a call schedule and inputting it into the database, could be automated. (In fact, some residents have been astonished that it isn't: "You do all this **manually**?!").

In fact, I'm hard-pressed to think of anything I do that couldn't be done by an AI. Even the compliance stuff, which is mostly data harvesting.
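A call rotation like the one described is, in fact, trivially scriptable. A minimal sketch follows; the resident names, the simple one-per-day round-robin rule, and the dates are all made-up assumptions, not a description of any real scheduling system:

```python
# Hypothetical sketch of automated call scheduling: a plain round-robin
# rotation, one resident on call per day. All names/rules are illustrative.
from datetime import date, timedelta
from itertools import cycle

def make_call_schedule(residents, start, days):
    """Return {date: resident}, assigning residents in strict rotation."""
    rotation = cycle(residents)
    return {start + timedelta(days=i): next(rotation) for i in range(days)}

schedule = make_call_schedule(["Patel", "Kim", "Okafor"], date(2023, 5, 1), 7)
```

A real scheduler would layer on vacation requests, maximum-consecutive-day rules, and so on, but the core bookkeeping is exactly this mechanical.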

I'm very glad I'm only a few years from retirement!

Except, wj, that it's management that implements AI, same as it is management that hires McKinsey.

My union could hire McKinsey to look at university management and offer solutions to administrative bloat that would likely free up a lot of overhead and prestige construction that do nothing for educational goals, but my union has no executive power.

Ah, nous, but you are being too straightforward.

You don't go to management with that proposal; as you say, they wouldn't be interested. Well, maybe the C-suite guys would be interested in a proposal to slash middle managers. But generally, no.

Instead, you go to the Board of Directors. Yes, most Boards are made up largely of executives from other companies, who might be self-aware enough to figure out that their company could be next. So you start with one which isn't. (Maybe a non-profit? Perhaps the UC Regents would go for it. The politicians involved listen to union donors, after all.) It's a matter of getting your foot in the door and creating a couple of proof-of-concept examples.

If AI works well, most companies will end up changing just because they must in order to stay competitive. Think of the assembly line. Once it was a radical innovation, and viewed with skepticism; then it became ubiquitous.

Your note about needing banks for capital has interesting implications, too. If AI management improves efficiency, I could see banks requiring it for borrowers. Or, more likely, just charging a premium for companies which don't have it.

Suggesting that the UC Regents would go for it tells me that you have never had to deal with the Regents on this level. It's UCOP and the Regents that push the austerity politics in the UC system, and who side with the most anti-union of the local campuses when it comes to questions of oversight.

I've been involved in contract enforcement work with the union for a while now and have been directly involved in PERB filings, and settlement negotiations, and arbitration decisions.

The Regents are absolutely the people who would hire McKinsey to look for ways to have "greater instructional staffing flexibility" while they approve higher salaries for all the chancellors and the system president.

My assessment was a sausage-level assessment. UCOP is offal.

hsh: At first glance, I read the post title as "AI as McKinney."

Same here. But then I've been asking for a while whether AI could do the work of judges -- Supreme Court "Justices", even.

Does AI take bribes?


Does AI take bribes?

Seems like a more likely issue would be how easy it is to hack.

Suggesting that the UC Regents would go for it tells me that you have never had to deal with the Regents on this level.

Quite true. I merely mentioned them because I happened to be aware that they include some politicians (the Governor, the Speaker of the Assembly, etc.) in addition to various others. A different non-profit might well be better. Having sat on one or two non-profit boards, I can attest that some of them are not wedded to blindly following management guidance.

Yeah, a non-profit operates more like a b-corp. Again, though, I think the potential crimp is in the finance hose. You'll end up having to crowdsource the budget to put pressure on, and US labor law usually sides with management and puts the burden on the side with fewer resources from the start.

In contract disputes at the campus level, for example, the university has paid professionals with paid staffs arguing their side while the union has one staff member working with union members working as unpaid volunteers. The university has lawyers in their direct employ and the union has to limit how much it uses the lawyer it has on retainer.

Giving everyone in the process access to AI powered legal advice would be a leveling thing, but there is no way in hell that the legal community would let that fly.

Can A.I. do anything to assist workers instead of management?

Well certainly it can, but that would depend on the distributional outcomes of the increase in productivity.

And that is an issue for political economy as they used to call it back in the olden days.

forgot to cite my impeccable source:


It's nice to see someone who understands where productivity fits in the basic solvency model. OTOH, to your distribution point, if the last 30 years is an indication, there's a serious question about whether future productivity gains will be subject to the Social Security payroll tax.

Distributing productivity gains (actually business gross income generally) is an issue, without reference to whatever AI may do. Expect this Gilded Age to see the same reaction as the first one.

Similarly, the tax system needs a makeover. Call me simplistic. But it seems to me that, if you are going to tax income (which decision was made long since), then to put it bluntly: income is income. Doesn't matter if it is wages, salary, consulting fees, stock options, dividends, interest, capital gains, inheritance,** whatever -- income is income. And should be taxed equally (at whatever level of the progressive tax system that ends up at).

À la nous, there goes my rapacious capitalist cred. :-)

** For those who whine about what that would do to family farms (to the extent they still exist) or other family businesses: so give your kids partial, growing, interest in the business while you're still alive.
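The "income is income" rule above amounts to pooling every income source and running the sum through a single progressive schedule. A toy illustration, with entirely made-up brackets and rates:

```python
# Toy illustration of "income is income": all sources are pooled, then one
# progressive schedule applies. Brackets and rates below are invented.
BRACKETS = [(0, 0.10), (50_000, 0.20), (150_000, 0.30)]  # (threshold, marginal rate)

def tax(total_income):
    """Apply the marginal-rate schedule to total income, whatever its source."""
    uppers = [b[0] for b in BRACKETS[1:]] + [float("inf")]
    owed = 0.0
    for (lo, rate), hi in zip(BRACKETS, uppers):
        if total_income > lo:
            owed += (min(total_income, hi) - lo) * rate
    return owed

# Wages, capital gains, and dividends pooled before the schedule applies.
total = 40_000 + 30_000 + 20_000
```

The point of the sketch is only that nothing in the computation cares whether a dollar arrived as salary, dividends, or inheritance.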

You inadvertently left gifts to Supreme Court judges off your list.

I suspect Supreme Court justices wouldn't like strict scrutiny being applied to them any more than they like to apply it to some of the cases they rule on.

At first glance, I read the post title as "AI as McKinney."

Just a note, in general, I imagine a line between posts and comments. It's fine to acknowledge a poster, especially in a good way, but pulling a poster up to carry on a debate, or worse, start a new one, is unfair because of the asymmetry. I've probably made an arch reference to someone's comment in a post, but as a rule, I try not to.

I came across this quip today about my area of work and AI

"To replace creatives with AI, clients will need to accurately describe what they are looking for.

We're safe."

After an underwhelming rollout, Google's chatbot is shaping up to be very competitive with the other chatbots. Plus, results can be exported to Google's other applications without having to copy and paste.


This is a good overview of Bard.

"Google finally responds to the market pressures from Microsoft and OpenAI. In this episode, we take a look at how their AI plans were unveiled during the Google I/O developer conference."
Google Strikes Back (The AI Wars) (YouTube)

ChatGPT is progressing beyond being a parlor trick to being a platform for accomplishing a number of different kinds of tasks.

ChatGPT's New 'Code Interpreter' Shocks Everyone! (Upload Files, Edit Videos + Examples, Full Guide)

Talking of creatives (novakant was, and Charles wasn't), some of the pieces on Amis are wonderful. I loved this, from a James Parker piece in the Atlantic in 2012, which was linked in one of the many pieces now being written about him. Apparently, when a newspaper (the Guardian? the Times?) put out a call to writers for pieces about him after his death, they were absolutely swamped:

In this state, it can be hazardous to read Martin Amis—to suffer the thrills of envy (I want it!), larceny (Can I steal it?), resentment (Bastard!), all leading where? Ah, you know where: into a writer’s dark night, the meat-locker chill of professional despair. The ego, inverted. I might as well give up. Pete Townshend and Eric Clapton, watching Jimi Hendrix at The Scotch of St James in London, were (according to Townshend) so harrowed with fear and wonder that they found themselves meekly holding hands. The apprentice writer reads Martin Amis, and whose hand can he hold but his own?

So many of the people writing about him acknowledge the effect he had on their style, and literary fiction in general. And his literary criticism and essays were, it is true, great. But what some have stressed was his moral seriousness, which was an inescapable aspect of his thought to anybody paying attention. I can't remember whether it was in this thread or not, but I think it was novakant who referred to Amis's regrettable remarks about Islamism, and in fact Muslims, in the wake of the terrorist attacks of the early 2000s. I say regrettable, because he openly regretted and apologised for them.


Dammit, I posted this on the wrong thread. Meant to put it on the current OT, which I will now do.
