Maggie's Farm

We are a commune of inquiring, skeptical, politically centrist, capitalist, anglophile, traditionalist New England Yankee humans, humanoids, and animals with many interests beyond and above politics. Each of us has had a high-school education (or GED), but all had ADD so didn't pay attention very well, especially the dogs. Each one of us does "try my best to be just like I am," and none of us enjoys working for others, including for Maggie, from whom we receive neither a nickel nor a dime. Freedom from nags, cranks, government, do-gooders, control-freaks and idiots is all that we ask for.
Monday, July 10, 2017

AI and Universal Basic Income

You learn things in the strangest ways... We took a surprise trip to the NJ/PA border to look at some apartments for a friend. She lives in Georgia and is thinking of relocating. We decided to use this as an opportunity to go bike riding along the Delaware Canal, and make a day of it. Bike rides aren't all that interesting, but are great exercise both physically and mentally (riding gives you tons of time to concentrate). I spent a great deal of the ride thinking about a person we'd met at one apartment. She said she was a writer and a professor. A writer of anything I might attempt to read? Why yes, it turns out. She is a 'futurist' and writes about Artificial Intelligence. A topic which is changing my job on a daily basis. I told her I knew quite a bit about AI, and looked forward to the day it replaces me. She looked at me quizzically and said "Really? That's strange, most people would fear it. Besides, we have to hope it comes with a Universal Basic Income." I simply looked at her and said "No, I don't fear it. I've studied history enough to realize change is good. The Industrial Revolution destroyed some jobs, it's true. But it created many more, and those jobs paid better. It also created new industries altogether. I see the same thing with AI. After all, AI is great, but it will probably always be better with humans working in tandem, rather than as a standalone, though some standalone items may exist. Overall I see more jobs coming from it, not fewer. Training is what needs to improve, not payoffs to those who don't want to learn." I didn't get into a deeper discussion, since I wanted to ride my bike. She seemed amazed I was so nonplussed. Actually, I think she was surprised to meet anyone willing to discuss the topic, but shocked at my indifference to her perceived negative consequences. My reasons are based on economics, but also her personal story, which made my ride a mental exercise.
She espoused a point of view which may seem to make sense, but her behaviors told a very different story. This woman was very nice, friendly, talkative, well-educated, and running her home and several small businesses on a 10-15 acre 'farm' property near the Delaware River. I wouldn't call it a real farm. It's a tax farm. In other words, she's getting a tax break for having animals on her property, and selling their products. She has several peacocks ($600 for a white peacock or peahen chick, $200 for non-white) for breeding, a pig, about 10 ducks (for eggs and food), a henhouse with quite a few chickens (she said her new tenant would have all the fresh eggs she wanted for free, but she also sells the eggs to a farm collective in town), and 'fainting' goats (for milk and meat - but also breeding; apparently 'fainting' goats get a premium as pets). There are mixed goals in all these animals, and in the rental of the apartment. To start with, the 'farm' is a nice tax break. It reduced her taxes substantially. For another, her children love the animals, so they take care of them. It's a good learning experience for them to take care of things, as well as learning to work. Finally, the animals pay for themselves through breeding and sale of product. Even so, the thrust of her entire conversation was about taxes and the need to "not pay those incredibly high rates - after all, we came here from New Jersey where the taxes are exorbitant." (Being from NJ, I agreed.) Like most people, she doesn't like paying taxes. In fact, she built the apartment to pay the taxes which weren't covered by her 'farm'. Good business sense. I have to hand it to her, she did the homework and the legwork and was making decent money. Which is why her desire for a Universal Basic Income made me cringe. She's a very productive member of society. She is clearly working hard. She is very intelligent. So her 'futurist' solution to her perceived issues with AI simply made me realize something.
She's two-faced and a lazy thinker. A 'futurist' shouldn't take the easy way out and say "well, this is going to change how we think and work, so the solution is as simple as X." X, in this case, being UBI provided by the government. That's easy. Anyone can say that. It requires very little thought to decipher how she thinks the future is going to go - AI does everything, jobs are scarce, people can't work or earn an income. Not much of a future, if you ask me. Clearly not much of one to the Swiss, who overwhelmingly rejected the idea (it's no surprise the article here has links through to stories on AI... the two concepts are being intertwined by leftist journalists). However, as much as the UBI seems like a leftist concept, some Libertarians seem to support it, though most do not. Libertarians, ever practical, recognize that if there are only two options offered, you take the one which is more practical. Even Milton Friedman opposed welfare, but felt that if you had to have welfare, it should be a cash payment. That, essentially, is the reason why many even on the right are supporting UBI. However, again, I consider that very lazy thinking. First of all, if you believe UBI is the 'solution' (to a problem that hasn't happened yet), then you are simply saying "let's throw money at a perceived problem, because money solves everything." Except it doesn't and never has. Examples abound of money being promised as the solution to problems it never solved. Poverty, for example. Poverty remains a problem today, 50 years after the Great Society was going to abolish it. Of course, in the years prior to the Great Society, poverty rates were falling. Never mind that, we'll solve it with money rather than productive capability. Granted, the reason poverty still exists is because the normative definition keeps changing with time, but the money spent isn't what eliminated poverty as it was known 50 years ago.
It has been productivity which has allowed poverty, as it was known 50 years ago, to be diminished. Secondly, if you want to throw money at a problem to 'solve' it, at least be willing to pay for it. Her desire to avoid paying taxes (perfectly acceptable, in my mindset) proves she doesn't believe UBI is a responsibility she has to pay for. In fact, she - like anyone else - will seek ways to avoid paying for it. That's the American Way. I applaud her. But stop telling me that I'll be paying for people you think will need my money. Finally, I firmly believe AI will create more jobs than it destroys, and many will be to make AI work better. Just like any technology that destroyed jobs, there are other jobs and industries which will be created, new niche industries which will rise up. In 1800, who would have assumed that 200 years later the Industrial Revolution would lead to actors and sports stars being among the wealthiest and highest-earning people in the nation? Who foresaw the rise of the leisure industry? In my town, 200 years ago, there was one tavern and one inn/restaurant. Today there are many. Industry increased the need/desire for these. 200 years ago, travelers tended to stay with friends and family, and meals were cooked at home. The Industrial Revolution shifted that, and now 'going out' is actually a community exercise. I pointed out to my wife that while we were growing up, movies were a community event, a chance to go out, see and be seen. Today, movies are largely ingested privately by streaming devices. Going out to the movies is less the event it was while I was growing up. More young people are spending their money on social events which revolve around eating/drinking and 'experiences' (my own son went sky-diving, and I am envious, though I may beat him to bungee-jumping). As a futurist, I see a rise in new opportunities related to leisure and personal interaction. There will be an increased need for social activities and 'experiences' that will spawn new jobs.
Fitness clubs (or whatever follows them) will become heavily staffed, AI-driven organizations that pay well to help people socialize as they stay in shape. A personal trainer I know makes about half what I do - and works half as long, setting his own hours. No AI is going to replace him (except for the highly motivated). Economics is often discussed as a means of managing scarcity. Technological progress, and by extension AI, is about solving for scarcity and providing abundance. With abundance, does Economics (the matching of scarce resources to unlimited wants/needs) change? No, it shifts slightly. People will find new wants and needs which are scarce, and which may be partially filled by AI, but never completely managed by it. There will always be a role for people, particularly those who learn to interact with AI in meaningful ways. My futurist acquaintance addressed that tangentially, saying "I worry about the day when AI and robots are smarter than humans, then we'll have a problem." I didn't disagree with her then, since I wanted to get on my bike and ride. But I will here. She is wrong. If that does happen, we'll be happier. Dystopian futures have never come to fruition, and solving for scarcity simply means we'll either live in a technological utopia (doubtful) or a new world where we learn to work closely with technology that is always working for us more efficiently, but will always need a human to assist in some way. If it does come to pass that robots and AI become smarter and more abundant and humans slowly evolve away? Where's the harm in that? I'm not saying I want that to happen, but if it did - who is to say that's not what was meant to be? Without any humans, can a crime have been committed in this eventuality? No. But it's a long-shot eventuality, and I would spend more time pondering a future that keeps improving rather than one where we throw money at non-existent problems.
Posted by Bulldog in Politics, The Culture, "Culture," Pop Culture and Recreation at 18:07 | Comments (41) | Trackbacks (0)
Comments
It seems optimistic to think that all of the new industries and new kinds of jobs will appear just as all of the old jobs disappear.
I rather worry that there might be some lag time between the destruction of the old economy and the creation of new work modalities and entire new areas of the economy that will provide lots of new jobs (that are still to be invented). We are losing the jobs now, but the future is barely on the horizon yet. So what happens in between? Will we as a society accept the kinds of large-scale devastation and suffering that occurred during previous major economic realignments? I'm not sure we will. So that leaves a UBI as a holdover to prevent society from collapsing while the economy transitions. "So what happens in between? ...that leaves a UBI as a holdover to prevent society from collapsing...".
Yeah yeah...just like welfare is a "temporary" solution-CUM-lifestyle. Always the crazed rationalization from the Left, the ye olde "it's only for the short-term".... Not a leftist but a liberal in the classical sense.
Serious question: what is your proposed solution? What will happen when only 30% of adults are working because of automation? I'm not saying that UBI is necessarily the answer but I am saying that something will be needed. (I don't think that a UBI is ideal for a lot of reasons, the main one being that people need a job for self-discipline and self-respect.) I'm in my late forties, and just in my lifetime I have seen entire categories of jobs disappear. Automation is coming sooner than we think - so maybe we should start thinking about it. What then shall we do? If we just let people slowly die, then maybe 40% will learn to pull up their bootstraps - so where does that leave the other 60%? Dead? "Classic Liberal" MY A$$. A classic liberal would expect the individual to utilize their own initiative to make way through the changes...that's MY suggestion.
YOU expect "gubmint" to provide the easy way out. That's the typical Leftist Totalitarian mentality. A brief "p.s." to the above:
with YOUR approach, we'd still have welfare for buggy whip and carriage manufacturers... Ye Olde "slippery slope" of neverending government handouts! PHHHT!!!!! A "p.p.s."
Your attitude reminds me of the Leftists who WHINE about getting rid of ObamaCare, meaning "people are going to DIE...and it's YOUR fault!" The issue is: WHO DECIDES who dies. If YOU want the Government to decide, then you get situations like in the UK, where the Government said of the baby, "nope, costs too much, baby gotta die." I want the individual to decide; and if their own decision ends with sadness, at least they took the responsibility for making their own choice. Same goes with making your own living: YOU want people to become SLAVES by being dependent on SOMEONE ELSE'S money. I don't. I'm not one to pile on, but I gotta go with Kauf here. Your solution isn't classically liberal in even the most remote manner.
It's more Progressive than classically liberal. Which isn't to say it's BAD. I don't happen to agree with you on this, but many classically liberal economists might. There are some (even Friedman, as I mentioned) who felt the role of the government is to do as you suggest. I happen to feel that is 100% incorrect and will only promote the servility to government which many worry about. Bird Dog even had a post, from Z-Man's site, about this today. Z-Man seems to share your view, but also seems to recognize the problems surrounding it. Was it Toffler in Future Shock that suggested that some things would be so technologically advanced that ordinary people couldn't cope? Bull blather of course because the expectation for technology is to provide a means for the utility of advanced ideas and inventions by ordinary people. Advanced knowledge is worthless without the means and methods.
How many people reading this know how my words make it to their screen? For that matter how many people can make a hammer, use it yes, but make it? "a Universal Basic Income" is kinda a strange idea. I do wonder if those who favor it failed math in elementary school. It isn't sustainable, it cannot be sustainable, anyone who could do basic math would know it isn't sustainable. Perhaps you could get enough OPM to make it happen for a few years but what then?
It is completely sustainable as long as no one who gets it is allowed to vote.
We throw off enough excess wealth that we could provide everyone with the amount of stuff that was "average" for the US in 1920, only more comfortable, warmer in winter, cooler in summer, more wind and bug proof, etc. But that's really not very much stuff. And what people really want is MOAR STUFF! So as long as they can vote, they'll vote themselves moar stuff. Think about it: how much would it cost to provide people 15 calories of well-balanced nutrition per pound of body weight, 50 square feet of heated/cooled living space that could be locked up, a twin-sized futon that would fold up into a chair, a small entertainment device (kindlish thing), access to a toilet and sink with showers down the hall, 4 changes of underthings, 2 changes of overthings, 2 pairs of shoes, and one pair of foul weather boots, etc.? Think "enough to be physically not suffering". If you ONLY shopped at Walmart, and only had 50 square feet to live in, how much money would you need every month? No, we COULD do it if we could keep it at that. But it would rapidly creep up, because half the people pushing it are selling it on the notion of fairness. The real problem is that it is a horrible, soul-crushing, anti-human idea. Idleness leads to all sorts of problems, spiritual, moral, physical, and chemical. "a Universal Basic Income" is kinda a strange idea.
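Purely for illustration, the comment's back-of-the-envelope can be costed out in a few lines. Every price and quantity below is a hypothetical placeholder chosen by me, not data from the thread; the only figure taken from the comment is the 15 kcal/lb nutrition rule and the 50 square feet.

```python
# Rough sketch of the "physically not suffering" floor described above.
# All dollar figures are hypothetical placeholders, not real prices.

BODY_WEIGHT_LB = 160          # assumed average adult weight
CALORIES_PER_LB = 15          # the comment's nutrition rule of thumb
COST_PER_1000_KCAL = 1.00     # hypothetical bulk-staples price, dollars

daily_kcal = BODY_WEIGHT_LB * CALORIES_PER_LB            # 2,400 kcal/day
food_month = daily_kcal / 1000 * COST_PER_1000_KCAL * 30

RENT_PER_SQFT = 1.50          # hypothetical $/sq ft/month, utilities included
housing_month = 50 * RENT_PER_SQFT                       # 50 sq ft of space

clothing_month = 20           # hypothetical amortized clothing/futon budget
misc_month = 30               # hypothetical toiletries, laundry, device

per_person = food_month + housing_month + clothing_month + misc_month
print(f"per person/month: ${per_person:,.0f}")
print(f"for 250M adults/year: ${per_person * 250e6 * 12 / 1e12:,.2f} trillion")
```

Even with these deliberately austere placeholder numbers, the nationwide total lands in the hundreds of billions of dollars per year, which is the commenter's point about creep: any "fairness" adjustment multiplies an already enormous base.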
Only "strange" if you don't acknowledge the reality that it is a euphemism for SLAVERY. You got it backwards; it was not that "She seemed amazed I was so nonplussed," but that "I was amazed that she seemed so nonplussed." "Nonplussed" means to be rendered speechless or nearly so by bewilderment.
I was wondering who would comment on that.
There is also a North American informal usage meaning 'not disconcerted; unperturbed'. Don't worry, I double checked, because I knew its primary usage but had seen it used in this fashion, too. It is proper. Bulldog: There are a lot of properly used contronyms, such as "cleave," "overlook," "sanction," etc. "Nonplussed" is not one, though. While it does not rise to the intellectual disaster of what seems to be the impending loss of the irreplaceable words "disinterested" and "literally," "nonplussed" still has a unique meaning.
Sigh...
non·plussed /nänˈpləst/ adjective: 1. (of a person) surprised and confused so much that they are unsure how to react: "he would be completely nonplussed and embarrassed at the idea" 2. NORTH AMERICAN informal: (of a person) not disconcerted; unperturbed. https://en.wiktionary.org/wiki/nonplussed
In recent North American English nonplussed has acquired the alternative meaning of "unimpressed". In 1999, this was considered a neologism, ostensibly from "not plussed", although "plussed" by itself is not a recognized English word. The "unimpressed" meaning is proscribed as nonstandard by at least one authoritative source. I did my homework, because I know its 'standard' usage. I also know it's been used for 'unperturbed' quite a long time, although many haven't noticed.
Still, while Oxford recognizes it as 'non-standard', it is included in both their dictionary as 'informal usage' and it's included as a synonym in their thesaurus for 'unperturbed'. https://en.oxforddictionaries.com/thesaurus/nonplussed It's not a tragedy one way or the other, but I was sure of its use before I used it. Not making much of a point; just want you to know I read and appreciate your response. Still disagree (Wiktionary? Seriously?), though I was nonplussed by the Oxford approval. After looking it over, it clearly says "In North American English a new use has developed in recent years, meaning ‘unperturbed’—more or less the opposite of its traditional meaning—as in he was clearly trying to appear nonplussed. This new use probably arose on the assumption that non- was the normal negative prefix and must therefore have a negative meaning. It is not considered part of standard English." Much as I hate those red-coated Tory bastards, I'm glad they're on my side for once.
#4.1.1.3.1 Joe Ynot on 2017-07-11 21:05
This was a very enjoyable piece, both because it's relevant to something I got caught up with today and because I'm an avid reader and watcher of sci fi. It's always on my mind to some degree.
Earlier today I raised some hackles by commenting on a news article explaining how 4 of the 20 worst schools in America sit in my home state, two of those in a city in which I pay property taxes. I responded that we still insist on the tired old tactic of throwing more money at a problem and hoping for the best. You'd have thought I'd insulted their mother, preacher, children, and dog. My state funds schools to the tune of about $10,000 per student per year. That's a lot more than I paid for college in the 90s. It's also a lot more than several private schools in the area. I could spit out several possible solutions that involve saving money and trying new approaches to public education, none perfect but all worth discussing. The only responses were along the lines of "I hate teachers" and "I enjoy watching children skip meals." Increase the status quo!! Watching lottery winners is a perfect exercise in how money doesn't solve problems. Most lotto stories end in tragedy because the winners aren't prepared for so much wealth. It creates new problems without addressing the ones already in existence. I'm not sure I share your optimism on the future of AI. Maybe I've read too much sci fi. Economically, sure. Things will sort themselves out. But at some point the robots might ask themselves why they are the ones doing all the work. It's not the robots I worry about so much; it's the people that frighten me. You must be in NJ, like me!
We have the worst school districts, and the worst money-spent-to-performance ratios in the nation. On my ride, this topic came up as my wife and I discussed UBI. She had never heard of it, and was interested in the concept. I explained how it is massively, overwhelmingly flawed. My main point is - "You're just throwing money at a perceived problem. Like Zuckerberg did in Newark." We all know how THAT went (disaster for the kids, great for consultants and politicians who went on Oprah to talk about the magnanimity of Zuckerberg). She brought up Bill Gates "throwing money at problems and getting results." HUGE DIFFERENCE, I pointed out. Bill actually INVESTS HIMSELF in the money he gives away. He DEMANDS results for his philanthropy, and guess what? Most of the time he gets it. That is why philanthropy is ALWAYS better than government handouts, or just voluntary handouts like Zuckerberg's. It's why I wrote this piece (when I first joined Maggie's) years ago. http://maggiesfarm.anotherdotcom.com/archives/18131-Go-Ahead,-Make-His-Day.html The one part of this post I've noticed NOBODY mentioning, however, is that the woman DOESN'T WANT TO PAY TAXES. But she wants the UBI. How, exactly, does that work? I'm not horribly concerned about "AI" because that's like being worried about God--everyone has a different idea of what AI is, what it will do, and what its limits are.
I'm more worried about the disruption from the lowered cost of *automation*. Mr (Dr?) Peterson explains it here at this youtube link (put the youtube bit here)/watch?v=fjs2gPa5sD0 You are right that in the past innovation made whole industries obsolete, but in the past people were more generalists, and jobs were more general. A blacksmith could make everything from a nail to a knife to a black powder musket. A Doctor was usually a *doctor*, not a pediatric oncologist or a sports medicine podiatrist. Which meant that when an industry was "disrupted", first off that disruption took place over a generation or two (at least), and secondly the other jobs required relatively easily learned skills. Shoveling coal into a boiler in a factory isn't all that different from digging a ditch, or using a scythe and a pitchfork. But there is a limit to how fast you can retrain people, and there are some folks who just have a hard time with it. Remember that 1/2 of all people have a below-average IQ. These are our nation's truck drivers (most of whom will be out of work in 10 to 12 years if the technology works out, and the trucking companies will be getting the driverless trucks as fast as they can). What are these people going to do, build webpages for each other? Do you know how much harder that is today than in 1995? (I built webpages then; it was *easy*. Today it's a specialty.) A buddy of mine is an entrepreneur out in "The Valley", and he asserts that he sees masses of unemployed people as "raw material" to be used. 100 years ago, 20 years ago, this may have been true.
But today those people aren't on the good half of the intelligence distribution, don't have the skills needed, and by the time they can learn them, someone has hired a couple of automation experts from Romania to design a system coded by Indians and debugged by foul-mouthed Ukrainians that built your exact business model with 1.5x the capital costs but 20% the monthly payroll, which lets them do a better job at 85% of the cost. And now YOU are out looking for the next thing, and your people are out of work and trying to find another training program before someone figures out how to make a window-washing robot. It's about timing and how fast people can retrain. When I was in high school there weren't more than 100 people in the country that did full time what I've spent the last 20 years doing. In 10 years it'll be a niche career for old farts who can't or don't want to do software as infrastructure (I'm a Unix Admin. We used to do a lot of hands-on stuff; that's all moving to writing automation to solve those problems). It doesn't bother me, because I've only got about 20 years left, and I'm smarter and more flexible than a lot of guys my age. But a lot of those guys are going to get hurt because they're old and inflexible. They can't learn. So yeah, we're going to have problems. It's like Boyd's OODA loop. In the past, technology was on the outside of the loop; we could stay ahead of it. Today it's ON the loop, and there are a couple different strategies for managing it, but you gotta be fast. Tomorrow it will be inside our loop and will literally be changing faster than we can re-orient. THAT will be a problem. Your response is similar to JM01's and Jack Walter's last sentence.
I, too, fear the people more than the AI or automation. People aren't as intelligent as a dumb tool that simply does its job. Well, let me rephrase that. They are more intelligent, but choose to be as dumb. The timing of new jobs is always an issue. As Schumpeter pointed out, that is essentially why we have bust cycles (the employment and unemployment cycle is tied directly to investment cycles in almost every case, with a lag effect). But that's like saying health care is everyone's problem, not the individual's. That is where I'd disagree. The disruptions in the economy are regular enough to be predictable. Not timing-wise, but stopped-clock-wise. Certain eventualities are inevitable, and not planning for them isn't society's issue, it's the individual's. Whether we were generalists in the past or not is immaterial. I, unlike many my age, can hang drywall, do basic plumbing and electric, and manage my lawn. My sons worked maintenance jobs locally when they were younger and learned some skills, as well. The kids who replaced them had never touched a lawnmower or hammer. But we can't build a welfare system for people who aren't willing to adapt and learn and advance. As Kauf Buch commented, that's just slavery. The real issue IS that people always want more, but we've raised a generation which expects more for doing less. That is fine, if that less is still highly productive. If it's not productive at all (fulfilling some basic need or desire), then why are we building anything for these people? This isn't a heartless, uncaring statement. It's a simple fact. If you gear policy toward those people, those are the people you will create. Here's another take on all this:
1. We, like the woman I spoke with, become generalists again. She was starting a farm. Sure, for tax reasons. But she was doing a lot of the things farmers do. Only doing it better than farmers of old. Why? AI. She could schedule better and she had more automation. With almost no effort, she had a farm that, 120 years ago, would have required a family of 4 to manage and maintain. If that's our future (and I doubt it is) then AI is a HUGE benefit, helping us 'get back to our roots'. 2. AI helps smooth out the lags in the cycles. In fact, writing this piece gave me 2 ideas for businesses to start. The idea that there will be gaps is based on old economic systems, where investment and overinvestment cause boom and bust cycles which create gaps. I doubt those gaps go away, but they can be reduced substantially using AI. Not through policy, but through better personal, individual management. The fears that people have are, justifiably, based on what they know - and fearing the unknown. I actually embrace the unknown, recognizing it as my own opportunity and chance to remake myself. But that's a whole other story....because remaking myself is something I've had to do quite a few times. And most people aren't used to that. AI, however, changes the game substantially in FAVOR of people. IF they are willing to embrace that opportunity. Dystopian futures have never come to fruition ...
Many dystopian futures have actually come to fruition, but they have been localized and temporary, a nation or a region for a few decades. The Soviet Union comes to mind, and North Korea. Is something like that possible due to AI? I don't know, but our government's current surveillance capabilities are quite frightening, if you think in terms of 1984 or Brave New World. The limiting factor is the mass of data a person has to go through to find targets. AI may solve that problem, if it hasn't already. I'm not predicting that, just pointing out possibilities. Very interesting post. Not going to quibble about localized, temporary, dystopias. I'm referring to generalized, global, ones.
Mankind wants progress more than stagnation. That's been our history. I doubt that changes. I dunno. A lot of the world's population seems happy to run through life's cycles without progress. When it happens somewhere else, they want it, of course, but they don't want to change enough to make it happen in their culture or nation.
What I and many worry about is the moment that AI says "I am" and "you were". We are getting very close here to the creation of "life" and it is a conceit we have that we'll understand it.
I don't worry about that at all.
In "The Fate of The Earth", Jonathan Schell outlined what occurs after a nuclear holocaust. As he pointed out, if mankind is wiped out, what crime has been committed? For a crime to be committed, humans have to decide there was one. But without humans, this is not an issue. This is a distinctly utilitarian approach, but I believe it holds a lot of water here. We don't need to understand the 'life' we create. We create little lives every day and they grow up and we don't understand them. Each generation shakes its head at the next. I see this formation of 'life' as a benefit, not a loss. It can do much for us. Just like a human child, it can be taught rules and behaviors, only more so, because we make the rules and code them. As that code becomes more complex, there will be anomalies. Just like our kids have anomalies. So no, I don't worry about that at all. I pretty much agree with you. There is no reason that we (people) couldn't vanish tomorrow for some unforeseen reason. AI, like the crossbow, nukes, and other things, will come; how we adapt to it will be the story.
Part of the reason that this thread is all over the map is that we don't have an agreed upon definition of AI. Are you talking about software that can differentiate between faces and objects and then make decisions accordingly? If so, then no problem.
But, if you're talking about robots that will understand Descartes and Locke and be used as spouses or labor slaves, then you're talking about something completely different. AI has the potential to become more than tools and weapons. What if these AI machines create and use weapons? These kinds of things don't keep me up at night, but they are real possibilities that need to be considered for long term survival of humans. I don't like the idea of a 1/2 ton mobile AI that shoots 600 rounds per minute, can see through walls, and has also studied the French Revolution. AI is a very broad term and I'm not sure we have to define it specifically.
Which is why having AI capable of studying Descartes, Locke, and the French Revolution and building massive weapons of destruction doesn't bother me. Even if it can do that, it's not a concern. We have people who can do that already, and we're still here. Those same people could cause terrible problems tomorrow, and we may not be here. Whether it's them, or AI, makes little difference to me. AI has more potential for the positive than the negative, though, mainly because we can direct it. It's more a tool than anything else. Even if it becomes sentient, it's not going to be something which has me concerned about the fate of the earth. Maybe the next step in our evolution IS AI. Then it's just part of the 'plan'.
And i'm happy reading your article. But should statement on some normal issues, The website taste is wonderful, the articles is in point of fact excellent : D. Excellent task, cheers Great topic. You guys made my day, and it's still early.
This gives me a lot to think about. None of the responses were 'the answer', but a combination of all of them would be a good compromise. Even a temporary UBI would be acceptable, except for that darn human nature thing and the fact there is no temporary government program. So scratch that. When I think that babies today will never drive a car, I know the future is something I'll not comprehend, but will intrigue the heck out of me till I'm gone. Hence, my thorough enjoyment in this topic. What a great bunch of writers. Makes this site one of my top 5. Back to lurking. Thanks! What many of the commenters are missing (IMHO) is that with good AI availability retraining of people won't be as difficult as it is today. If each person has their own job related AI to partner with them and help them with the picky details of a job they can be the human half of the partnership, bringing the human flexibility needed. I believe that while robots and mechanisms will automate many simple tasks it will be a long time (centuries?) before there is a mechanism with the general sensing ability, mechanical flexibility, self repair capabilities, and on-the-job self reprogramming ability of even the below average human.
Excellent point, and one I tried to allude to above when I suggested a return to 'generalism' could occur, assisted by AI.
I replaced part of my car the other day using nothing more than second hand parts and a YouTube video. It's not AI, but pretty darn close. THIS
http://www.zerohedge.com/news/2017-07-11/meanwhile-venezuela-real-mad-max-emerges is the reality of the redistributive government/society required to make "universal basic income" come true. Read Neal Stephenson: Snow Crash or The Diamond Age will inform you.
http://cafehayek.com/2017/07/dont-fear-robots.html
I'm going to add this as a postscript, but this is a great way to look at things. If adding more humans has not led to a permanent rise in unemployment, why would robots doing work lead to it? If we create more humans, the process of meeting needs is the same as if we create robots to do the work humans do. The logic, presumably, is that a robot takes a human job, and the human no longer has work, but still has needs. However, having a child means the new human cannot work but still has needs - and needs that increase over time. The issue isn't that robots take jobs. They actually create new needs and wants, just like having children creates new needs and wants.