You learn things in the strangest ways...
We took a surprise trip to the NJ/PA border to look at some apartments for a friend. She lives in Georgia and is thinking of relocating. We decided to use this as an opportunity to go bike riding along the Delaware Canal, and make a day of it.
Bike rides aren't all that interesting but are great exercise, both physically and mentally (riding gives you tons of time to concentrate). I spent a great deal of the ride thinking about a person we'd met at one apartment. She said she was a writer and a professor. A writer of anything I might attempt to read? Why yes, it turns out. She is a 'futurist' and writes about Artificial Intelligence, a topic which is changing my job on a daily basis. I told her I knew quite a bit about AI and looked forward to the day it replaces me. She looked at me quizzically and said, "Really? That's strange, most people would fear it. Besides, we have to hope it comes with a Universal Basic Income."
I simply looked at her and said "No, I don't fear it. I've studied history enough to realize change is good. The Industrial Revolution destroyed some jobs, it's true. But it created many more, and those jobs paid better. It also created entirely new industries. I see the same thing with AI. After all, AI is great, but it will probably always work better with humans in tandem than as a standalone, though some standalone applications may exist. Overall I see more jobs coming from it, not fewer. Training is what needs to improve, not payoffs to those who don't want to learn."
I didn't get into a deeper discussion, since I wanted to ride my bike. She seemed amazed I was so unfazed. Actually, I think she was surprised to meet anyone willing to discuss the topic, but shocked at my indifference to what she perceived as negative consequences. My reasons are based on economics, but also on her personal story, which made my ride a mental exercise. She espoused a point of view which may seem to make sense, but her behaviors told a very different story.
This woman was very nice, friendly, talkative, and well-educated, running her home and several small businesses on a 10-15 acre 'farm' property near the Delaware River. I wouldn't call it a real farm; it's a tax farm. In other words, she's getting a tax break for having animals on her property and selling their products. She has several peacocks ($600 for a white peacock or peahen chick, $200 for non-white) for breeding, a pig, about 10 ducks (for eggs and food), a henhouse with quite a few chickens (she said her new tenant would have all the fresh eggs she wanted for free, but she also sells the eggs to a farm collective in town), and 'fainting' goats (for milk and meat, but also for breeding - apparently fainting goats command a premium as pets).
There are mixed goals in all these animals, and in the rental of the apartment. To start with, the 'farm' is a nice tax break; it reduced her taxes substantially. For another, her children love the animals, so they take care of them - a good learning experience in responsibility as well as in work. Finally, the animals pay for themselves through breeding and the sale of product. Even so, the thrust of her entire conversation was about taxes and the need to "not pay those incredibly high rates - after all, we came here from New Jersey where the taxes are exorbitant." (Being from NJ, I agreed.)
Like most people, she doesn't like paying taxes. In fact, she built the apartment to pay the taxes which weren't covered by her 'farm'. Good business sense. I have to hand it to her, she did the homework and the legwork and was making decent money.
Which is why her desire for a Universal Basic Income made me cringe. She's a very productive member of society. She is clearly working hard. She is very intelligent. So her 'futurist' solution to her perceived issues with AI simply made me realize something. She's two-faced and a lazy thinker.
A 'futurist' shouldn't take the easy way out and say "well, this is going to change how we think and work, so the solution is as simple as X." X, in this case, being UBI provided by the government. That's easy. Anyone can say that. It requires very little thought to decipher how she thinks the future will go - AI does everything, jobs are scarce, people can't work or earn an income. Not much of a future, if you ask me. Clearly not much of one to the Swiss, who overwhelmingly rejected the idea (it's no surprise the article here links through to stories on AI... the two concepts are being intertwined by leftist journalists). However, as much as the UBI seems like a leftist concept, some Libertarians seem to support it, though most do not. Libertarians, ever practical, recognize that if there are only two options offered, you take the one which is more practical. Even Milton Friedman opposed welfare, but felt that if you had to have welfare, it should be a cash payment. That, essentially, is why many even on the right are supporting UBI. However, again, I consider that very lazy thinking.
First of all, if you believe UBI is the 'solution' (to a problem that hasn't happened yet), then you are simply saying "let's throw money at a perceived problem, because money solves everything." Except it doesn't, and never has. Examples abound of problems money was supposed to solve and hasn't. Poverty, for example, remains a problem today, 50 years after the Great Society was going to abolish it - and in the years prior to the Great Society, poverty rates were already falling. Never mind that; we'll solve it with money rather than productive capability. Granted, poverty persists partly because the normative definition keeps changing with time, but it isn't the money spent that eliminated poverty as it was known 50 years ago. It has been productivity which has allowed that kind of poverty to be diminished.
Secondly, if you want to throw money at a problem to 'solve' it, at least be willing to pay for it. Her desire to avoid paying taxes (perfectly acceptable, in my mindset) proves she doesn't believe UBI is a responsibility she has to pay for. In fact, she - like anyone else - will seek ways to avoid paying for it. That's the American Way. I applaud her. But stop telling me that I'll be paying for people you think will need my money.
Finally, I firmly believe AI will create more jobs than it destroys, and many of them will be jobs that make AI work better. As with any technology that destroyed jobs, other jobs and industries will be created, and new niche industries will rise up. In 1800, who would have assumed that 200 years later the Industrial Revolution would lead to actors and sports stars being among the wealthiest and highest-earning people in the nation? Who foresaw the rise of the leisure industry?
In my town, 200 years ago, there was one tavern and one inn/restaurant. Today there are many. Industry increased the need and desire for these. 200 years ago, travelers tended to stay with friends and family, and meals were cooked at home. The Industrial Revolution shifted that, and now 'going out' is actually a community exercise. I pointed out to my wife that while we were growing up, movies were a community event, a chance to go out, to see and be seen. Today, movies are largely consumed privately on streaming devices, and going out to the movies is less the event it was in my youth. More young people are spending their money on social events which revolve around eating, drinking, and 'experiences' (my own son went sky-diving, and I am envious, though I may beat him to bungee-jumping).
As a futurist, I see a rise in new opportunities related to leisure and personal interaction. There will be an increased need for social activities and 'experiences' that will spawn new jobs. Fitness clubs (or whatever follows them) will become heavily staffed, AI-driven organizations that pay well to help people socialize as they stay in shape. A personal trainer I know makes about half what I do - and works half as long, setting his own hours. No AI is going to replace him (except for the highly motivated).
Economics is often described as the management of scarcity - the matching of scarce resources to unlimited wants and needs. Technological progress, and by extension AI, is about solving for scarcity and providing abundance. With abundance, does economics change? No, it shifts slightly. People will find new wants and needs which are scarce, and which may be partially filled by AI, but never completely managed by it. There will always be a role for people, particularly those who learn to interact with AI in meaningful ways.
My futurist acquaintance addressed that tangentially, saying "I worry about the day when AI and robots are smarter than humans, then we'll have a problem." I didn't disagree with her then, since I wanted to get on my bike and ride. But I will here. She is wrong. If that does happen, we'll be happier. Dystopian futures have never come to fruition, and solving for scarcity simply means we'll either live in a technological utopia (doubtful) or a new world where we learn to work closely with technology that serves us ever more efficiently, but will always need a human to assist in some way.
And if it does come to pass that robots and AI become smarter and more abundant while humans slowly evolve away? Where's the harm in that? I'm not saying I want that to happen, but if it did - who is to say that's not what was meant to be? Without any humans, can a crime have been committed in that eventuality? No. But it's a long-shot eventuality, and I would spend more time pondering a future that keeps improving rather than one where we throw money at non-existent problems.
Postscript 7/13/2017: Cafe Hayek addresses the Robot Revolution