
Tuesday, August 30, 2016

Chicken AI

My wife has acquired a flock of 8 chickens. As a city boy, I find this weird, but she is a farm girl and she loves chickens. I don’t mind the fresh eggs anyhow.

So, I found myself wondering about chicken intelligence. As I watch the chickens, I find myself curious about building a chicken AI. There has been so much hype about AI imminently taking over the world that I thought about what we would have to do to build a chicken AI, that is, a computer that was as smart as a chicken.

There are 8 chickens on our land. One avoids all the others. Let’s call it the autistic chicken. My wife informs me that there is a dominant chicken. One of the perks of being the dominant chicken is getting to sleep on the highest rung of the coop. (I don’t know why this matters to a chicken; it never came up in Brooklyn when I was growing up.) The chickens like to hang out together (except for the autistic one). When I come by, they all try to be close to me. I don’t know why. I am not feeding them, so it has nothing to do with food. I do make chicken noises at them, and they make noises back until I go away.

So, what would be going on in the mind of a chicken? 

From Modern Farmer: 

Purpose of the Pecking Order

Pecking order rank determines the order in which chickens are allowed to access food, water, and dust-bathing areas. It determines who gets the most comfortable nesting boxes and the best spots on the roosting bar. The good news is that, at least among a flock of chickens born and raised together, the pecking order is established early on and the birds live in relative harmony, with only minor skirmishes now and then to reinforce who is in charge.

The chicken at the top of pecking order has a special role to play in the flock. Because they are so strong and healthy, it’s their responsibility to keep constant watch for predators and usher the others to safety when a circling hawk appears or a strange rustling is heard in the bushes nearby. The top chicken is also expected to be an expert at sniffing out food sources, such as a nest of tasty grubs under a fallen log, or a bunch of kitchen scraps that the farmer dropped on their way to the compost pile. Even though the top chicken has the right to eat first, he or she usually lets the others feed, while keeping a vigilant watch for predators, and dines only after everyone else has had their fill.

Certainly a chicken has clear needs, and it tries to figure out a way to get what it wants. This is a simple idea, but one that is way beyond any AI program out there today. No one told the chicken what its needs should be. It just has them. It wasn’t programmed by a human to have those needs. No one told the chicken how to get its needs fulfilled. And no one told the autistic chicken to avoid all the other chickens most of the time. So while you can get a computer to do any of these particular things, what we cannot do, at least not yet, is get computers to generate their own needs and figure out ways to fulfill them. The reason is that this kind of thinking requires observation, copying, weighing the pros and cons of various behaviors, and learning from the results. You also have to figure out when the pecking order has been established and it is time to give up trying to better your position.

Chickens clearly have a rich sensory system, so much of what they “figure out” would come from what they see and feel. When ELIZA responds to someone saying they are feeling sad with “why are you feeling sad?” it is not relying on much cognitive power. A person who knew who he or she was talking with would sense the sadness, possibly know where it was coming from, and might respond, “You have simply got to get over her.”
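To make the point concrete, here is a minimal sketch of the kind of pattern matching an ELIZA-style program relies on. The rules and wording are my own illustrative guesses, not Weizenbaum’s actual script:

```python
import re

# Two illustrative ELIZA-style rules: match a surface pattern and echo part
# of the input back as a question. There is no model of sadness and no model
# of the speaker; it is just string substitution.
RULES = [
    (re.compile(r"i am feeling (.+)", re.IGNORECASE), "Why are you feeling {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def eliza_reply(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more."

print(eliza_reply("I am feeling sad"))  # -> Why are you feeling sad?
```

Nothing in those rules knows what sadness is, where it came from, or what to do about it.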

The lesson here for AI is simple enough. The chickens are reasoning from data, but probably not using statistics to do it. The chickens can copy behavior, try things out to see if they work, remember what has and has not worked, and get smarter from experience. Awareness of the world around you, and a sense of who might do what, matter a great deal.

Will AI be able to do this someday? I assume so. We are not there yet however.

The chickens come to visit my foot: 



The autistic chicken: 


Tuesday, August 9, 2016

AI is everywhere. Just ask anyone.

This was on the John Oliver show this week. As he points out, this is pure nonsense:




[video]



Of course, we could simply note that the speaker is ignorant about what AI means and is really talking about machine learning. Here is an excerpt from the Wikipedia entry on machine learning:

Machine learning is closely related to (and often overlaps with) computational statistics; a discipline which also focuses in prediction-making through the use of computers. It has strong ties to mathematical optimization, matrix theory, linear algebra, and copulas, which delivers methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is unfeasible. Example applications include spam filtering, optical character recognition (OCR),[5] search engines and computer vision. Machine learning is sometimes conflated with data mining,[6] where the latter sub-field focuses more on exploratory data analysis and is known as unsupervised learning.[4]:vii[7]


But can statistical approaches to text processing do what the speaker suggests? Yes. It is called search, and journalists already do this. Could a computer automatically do search? I am not even sure what that means. A computer can’t do anything that it hasn’t been told how to do and for which clear parameters haven’t been specified. Getting the computer to “automatically” find photographs doesn’t sound so difficult. And it is quite easy if you are looking for a picture of John Oliver, for example. Or is it?

Here are some pictures that appear on Google images search for “John Oliver.” 











How would the machine know which of these pictures are appropriate for a newspaper to use? It wouldn’t.
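To see why, here is a toy sketch of what keyword matching amounts to: score captioned photos by how many query words they share and return the best scorer. The file names and captions are invented for illustration:

```python
# Toy keyword search over captioned photos: score each caption by how many
# query words it shares, return the highest scorer. Captions are made up.
PHOTOS = {
    "oliver_desk.jpg":   "john oliver at his desk on last week tonight",
    "oliver_parrot.jpg": "john oliver holding a parrot in a costume",
    "random_dog.jpg":    "a dog chasing a ball in the park",
}

def search(query):
    query_words = set(query.lower().split())
    def score(caption):
        return len(query_words & set(caption.split()))
    return max(PHOTOS, key=lambda name: score(PHOTOS[name]))

print(search("John Oliver"))  # matches on the words; says nothing about
                              # which photo a newspaper should actually run
```

Matching on the words “John Oliver” is easy; nothing in the score says whether the photo is flattering, current, or fit to print.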

As AI heads for its inevitable winter due to overpromising, I am hoping for Global Warming (of investors).






Monday, August 1, 2016

Who is ruining our schools? Sorry to tell you: it is you

I was interviewed by El Pais (the newspaper in Spain with the highest circulation) last week. I proposed the usual things about education that I tend to suggest. Eliminate the 1892 curriculum. Stop teaching algebra. Eliminate classrooms. Let kids learn whatever interests them. Move to a virtual model where any kid who wants to learn to be a fireman or a doctor or an aerospace engineer could do so (in a virtual world, working with kids who have the same interests but may not live nearby, and with a human mentor who can answer their questions and give them help). I have proposed this kind of thing many times before, most recently in my book, Make School Fun.

The interview is here (of course it is in Spanish):


The reaction was the usual as well. Twitter was alive with people who loved what I had to say, people whose kids hate school, and people who know that it is politicians and publishers/test makers who oppose all change. Math teachers think I am an idiot, which is usual, despite the fact that there is no evidence whatsoever that learning algebra helps you learn to think. Working at anything that requires careful reasoning would teach you to think. Math should be taught in context, as needed. Build a bridge and you will learn to think. You may need to learn some math as well. Why do we jam this stuff into kids’ heads regardless of their interests? It never stays there. No one remembers algebra who fails to use it regularly, and very few of us use it at all.

I also, because I like to make trouble, decided to attack Don Quixote. Every Spanish child learns this book. The reporter was upset with me on this one too, because she said it was basic cultural knowledge of Spain that everyone has to have. I asked if that were true in Mexico where everyone has to learn it too, so she stopped pursuing it.

But the attacks online on this one were multiple and very revealing. I responded to one (always a bad idea) by guessing that you could ask any Spanish worker and they would have forgotten the Cervantes that they had been forced to read. The response was that every Spanish worker knew a quote about honey and the mouth of an ass. I asked why this mattered and was told that it taught one how to deal with one’s bosses and customers.

Maybe it does. But here is an idea. Why not teach business instead of literature (to students who are interested in business)? I have learned through living that “neither a borrower nor a lender be” is a very accurate observation. But I didn’t learn it from Shakespeare. I know the quote, but I learned its truth through mistakes I have made (which is pretty much how you learn everything).

Who is it that keeps insisting on what everyone must read, what everyone must know, and which courses everyone must take? I have come to realize that although the government enforces this stuff, the real culprit is us. By us I mean the people likely to be reading what I have written here. Intellectuals believe that everything they were “exposed” to in school is valuable in some way. They believe this because they cannot be caught saying: “Shakespeare? Who is that?”

We intellectuals live in a world where passing knowledge of Dickens and Thoreau is considered normal, and even if you are a professor of Computer Science, you would be expected to know something about that stuff despite its lack of relevance for your daily life. (Remember that relevance comes after experience. I didn’t care about Dickens until I had lived some. I hated Dickens when I was forced to read him at 12 and loved him at 30, when I had a sense of what the real issues in life were.)

Also, what we forget is how much is not taught if the government considers it inconvenient.

When I am in Spain, I like to point out that nowhere in their history classes do they mention that they wiped out every single Native American in Uruguay which is today (intentionally) a completely white country. In Canada they fail to mention the massacre of the French so that the British could take Nova Scotia. (Leading to the escape of some French to Louisiana.)  The history we learn is meant to make us love our country. This is true for literature too. (Or at least love your language in the case of Americans having to read British literature.) The government wants to make sure its people are ready to lay down their lives for their great country. This gets worse in dictatorships, but it is a basic in democratic countries as well. 

“Everyone must be exposed to this” is the usual argument. Well, you were exposed to chemistry for a year. When was the last time you balanced a chemical equation? You were exposed to logarithms; can you explain how they are used? Exposure doesn’t work. A school system run by intellectuals makes a population that is very “undereducated,” to use Donald Trump’s term.

This “under-education,” which is caused by schools jamming stuff down the throats of kids who resist it mightily, causes exactly what it was intended to cause. Many times in Washington, I have proposed fixing the education system and teaching people to think instead of making them memorize stuff they aren’t interested in. The response I very often get is: “but who would sweep the streets?” It has been every government’s plan to make school unappealing for the majority of the population so that they will do menial jobs. This was an explicit US policy in 1900, when we had factories we needed to staff. But today, it is just ridiculous. We can’t afford to have the vast majority of the population incapable of engaging in anything more than superficial thought. A population that has learned to hate school is one that is full of people who would rather watch TV than think.


We don’t even bother to teach people how to raise children or how to have reasonable human relationships. Why don’t we expose students to child rearing, or getting along with others, or how to work, or how to start a business, or how to manage your own finances? Because we must expose them to Cervantes, or Dante, or Moliere. Pick your country and you will get the intellectuals’ “must expose” argument. But all this exposure is not working very well. We are not producing the kinds of citizens who can engage in rational arguments and make good decisions about their own lives.

Monday, July 25, 2016

"John hit Mary" and other AI problems



With articles being written about AI constantly now, I feel it is time to think about the basics. How do you get a computer to understand a simple sentence such as “John hit Mary”?

When I first started working on computational linguistics (as it was called then), the linguists made clear that they thought this was easy to do on a computer. You just used a syntactic parser and identified the noun phrases and the verb phrases.
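For concreteness, here is a toy sketch of what that parser-only view produces; the miniature grammar and lexicon are mine, just for illustration:

```python
# A toy context-free parse of "John hit Mary": label the noun phrases and the
# verb phrase, exactly as the linguists suggested, and stop there.
LEXICON = {"John": "NP", "Mary": "NP", "hit": "V"}

def parse(sentence):
    words = sentence.split()
    tags = [LEXICON.get(w) for w in words]
    if tags == ["NP", "V", "NP"]:  # S -> NP VP ; VP -> V NP
        subject, verb, obj = words
        return ("S", ("NP", subject), ("VP", ("V", verb), ("NP", obj)))
    raise ValueError("the toy grammar cannot parse this sentence")

print(parse("John hit Mary"))
# ('S', ('NP', 'John'), ('VP', ('V', 'hit'), ('NP', 'Mary')))
```

The tree tells you who is the subject and who is the object. It says nothing about spousal abuse, five-year-olds, blackjack, or boxing.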

I thought this was absurd, just as absurd as the idea that AI is coming tomorrow to eat us all.

To explain why, let’s discuss this sentence. What happens when you hear it? You react in some way. You might think that this was spousal abuse and that the police should be called. But then, I could tell you that John is 5 and Mary is his mother. Then you might wonder if and how Mary punished him. Or, I could tell you that John is 50 and Mary is his mother. Now you are wondering about the police again, and also about what is wrong with John. Or, I could tell you that Mary is 5 and John is her father, and you would be wondering about his parenting skills.

Absent any of this information, you make assumptions. I could ask you what Mary was wearing or what color her hair was and you might very well have an answer, or at least a guess. We comprehend through visualization and imagination. No sentence makes full sense out of context and we rarely have the full context, so we imagine it. People are constantly figuring out the parts they don’t know for sure. We make mistakes all the time. That is how human comprehension works. Did I mention that John and Mary were each driving their own cars at the time? Did I mention that John is a blackjack dealer and Mary was playing blackjack? Perhaps John is a baseball pitcher and Mary was the batter. Maybe they are both boxers.

Sentences don’t mean much out of context but two things are true:

1. We never have the full context, so understanders make inferences, draw pictures in their minds, and attempt to do the best they can.

2. Computers, in order to do this effectively, would have to have what people have: a model of the world.

When you hear about all the AI programs being worked on today it is safe to assume that they are not even thinking about building complex world models.

For fun, I typed “John hit Mary” into Google. The first thing that comes up is an article on empathy that discusses some of what I have been saying here. The second thing that comes up is an excerpt from a book of mine (Explanation Patterns) which discusses the belief structures underlying the comprehension of such a sentence.  

Since the media and many companies would have us believe that they have solved the natural language problem, let’s consider a real example of “John hit Mary.” This is from the New York Post (July 25, 2016):

Cops smashed their way into Hollywood star Lindsay Lohan’s posh London flat after a furious bust up with her Russian lover.
Police were called in as Lohan, 30, suffered a meltdown on the balcony of her Knightsbridge apartment with boyfriend Egor Tarabasov, 22 – claiming she had been attacked.
Waking up neighbors, the A-lister shouted, “He just strangled me. He almost killed me.”
In footage taken by a neighbor she could be heard begging for help outside of her $4.2 million home at 5 a.m. on Saturday morning.
Shouting her name and address across the street, she screamed, “Please please please. He just strangled me. He almost killed me. Everybody will know. Get out of my house.”
She added, “Do it. I dare you again. You’re f—ing crazy. You sick f—. You need help. It’s my house, get out of my house.”
She was also heard shouting to Egor — who was also near the balcony, “I’m done. I don’t love you anymore. You tried to kill me. You’re a f—ing psycho,” before adding. “We are finished.”
She added, “No Egor, you’ve been strangling me constantly. You can’t strangle a woman constantly and beat the s— out of her and think it’s ok. Everybody saw you touch me. It’s filmed. Get out! Get out.”
Ten minutes later police arrived after receiving reports of a “woman in distress” – forcing their way into the property – only to find it empty after going inside.
They said no crime was committed and no arrests were made.

How would a computer understand this story? It would need a better model of the world than I have. I don't know much about Lindsay Lohan except that she was a popular child actor who now seems to be in trouble quite often.

Reading this story, I wonder what is wrong with her. Why does she make such bad choices? How is a 22-year-old Russian the right man for her?

Then, also, I wonder why no arrests were made. Was she making it all up? And how about “you have been strangling me constantly?” Really? Why would someone put up with that? Sometimes, I know, poor women put up with abuse because they have nowhere to go. But isn’t she the rich one?

So, when I read this, I wonder many things about what is wrong with this woman, why her life went so wrong, why someone isn’t helping her, and what parts of this story have been left out.

A computer would need a deep model of why people do what they do, better than the one I have, because I am having trouble even understanding why this is news. (Of course, it is the New York Post.)


When computers can tell me what the real issues are here and enlighten me in some way about the questions I asked, then I would be impressed. Not afraid of this AI, simply happy to know that a computer could explain stuff to me that I don't have a good world model for, and hence don’t understand very well. It is the building of complex world models about why people do what they do that underlies all understanding. The AI being worked on today doesn't even attempt to solve that problem.

Tuesday, July 19, 2016

5 questions about human intelligence that make clear AI is far from being here yet

I have some questions for you:

  1. How many windows were in the house or apartment in which you lived when you were ten?
  2. Can you name all 50 states? (For Europeans: can you name every country in Europe?)
  3. What was served at your birthday party when you were 13?
  4. When you came back from your first trip abroad, how did you describe the experience to your friends?
  5. What was the most difficult interaction you ever had with a teacher and what did you learn from that experience?



Why am I asking these questions? The popular world has suddenly become obsessed with AI. Venture capitalists have become obsessed with funding AI companies. I thought it might be helpful if we discussed I (as opposed to AI) a little bit. You can’t really expect an AI to take over the world if it isn’t intelligent. Since the media are so concerned with this impending takeover, I thought I would take a shot at explaining some aspects of human intelligence, and the properties of human memory upon which it relies, that AI will have to emulate in order to be intelligent.

So, question 1: how does one answer it? Actually, the answer is pretty simple. You need to take an imaginary walk around your dwelling and count the windows. I always used this example in my AI classes. Why? Because taking an imaginary walk around a house requires visual memory. We can remember what things looked like, typically imperfectly, and can find the answer. There is nothing to look up. No data to search. No “deep learning” to be had. You simply have to look. But how simple is that? Can we create a computer that can walk around its own prior visual experiences? Possibly. But the computer would have to have remembered what it saw, not in terms of pixels but in conceptual terms. (“There was a green couch in the living room, I am pretty sure.”) So, memory is visual, but it is also reconstructive. We figure the couch had to have had an end table nearby, but we don’t remember it, so we imagine it and attempt to reconstruct it. People get into arguments with family members over this kind of stuff because our memories are imperfect and we reconstruct in idiosyncratic ways. An AI would need to be able to do that. (Fight with its siblings? Yes.)

I asked question 2 in my classes every year. (Former students, do you remember that? Maybe you do and maybe you don’t. Can you remember why I did it?) I did it because I was trying to explain the difference between recognition and recall memory. I can’t recall a student who could actually name all 50 states. (There may have been one or two.) Mostly they got 47 or 48. They usually left out Utah or Idaho or Arkansas. Why? When I pointed out the states they had missed, no one ever said: I never heard of that state. They knew the names of all 50 states, but in order to name them they didn’t search the web to find the list. Modern AIs would have that list. (But modern AIs aren’t really intelligent and don’t behave the way humans do. They can just search lists.) How do people do it? They “walk around” a map that they can visualize. They go down the East Coast and they go up the West Coast. They rarely miss any of those states. It is those darn middle states that cause all the trouble. Why? Because the maps that we have stored in memory are imperfect. Memory is very important in human intelligence, but we are kind of bad at it. Does this mean AIs will beat humans at memory tasks? They might. They probably could name all the states, but they would do it very differently. They could win Jeopardy, but not by doing what people do when they recall information. Does this matter? Yes it does. I am getting to why.

Question 3. Why would anyone remember what one ate at a birthday party many years ago? You might not. Part of human memory is its ability to be selective about what it remembers. Not all experiences are equally important. We need to learn from the important ones and disregard the unimportant stuff. Can AI do that? Not that I know of. “Importance” implies that one has goals. These goals drive what we pay attention to and what experiences we dwell upon as we grow up. Oh, but modern AIs don’t grow up. They just search, and store, and search some more. They don’t get wiser from each experience. And they don’t reconstruct. I have no idea what the food was at my 13th birthday party, but that was a big occasion in my world, so I can guess. I really would guess badly, because the food was not the issue; the party was. (And I have pictures, but only of the bread and the cake.) My memory helps me figure out answers, but it does not provide them. My memory is full of experiences that I have to re-interpret every time. That is what intelligence is based upon: faulty memory. So, modern AI can make better memories perhaps, but of what? Words in texts? My memory is based upon emotions. That was a big day for me. I remember cousin Joanie dancing. (Or was it my girlfriend Phyllis?) I remember my grandmothers kissing me. I remember my mother’s yellow dress. Memory is like that. (And since I am male, it is not shocking that I remember the females, who always held (and still do hold) a fascination for me.)

Question 4. My first trip abroad, which lasted about a month, has maybe five salient memories. One was watching my mother do business in Austria and noticing that she had failed to notice something her competitors were doing that was hurting her. A second was driving around some of Eastern Europe by myself, a drive which included me passing a farmer in a wagon in Yugoslavia and feeling him hit my car with his horsewhip. (Maybe I wasn’t supposed to pass him.) A third was meeting a girl on the plane from Vienna to Tel Aviv simply because I asked her a question (in English, of course) and she was ecstatic to find someone else on the plane to talk to. (Our relationship lasted all of two weeks, but I remember it.) A fourth (this was 1967) was seeing the Israelis already building settlements on the West Bank and wondering how exactly doing that would lead to peace. The fifth was my visit to Venice, where I was hosted by a cousin who tested my “American crudity” by asking me to eat spaghetti, assuming that I would do it wrong, and being disappointed when I didn’t. (I grew up with a lot of Italians in Brooklyn.) Why am I telling you this? Because this trip lasted a month. I can remember a little more about it, but not a month’s worth of stuff. I remembered stuff that caused me to learn something important about business, about how to meet women, about international politics, and about things I still don't understand (e.g., the farmer with the whip). We learn from experience. Any serious AI program would have to do the same. Too bad what we mean by AI today isn’t even close to what I am talking about.

The last question is obvious. A good teacher makes you think. I had plenty of those. I also had one who hit me. I didn’t learn much from that except to stay away from her. As I write this, I am on the way to the 90th birthday party of my PhD thesis advisor (Jacob Mey). All my interactions with him were difficult. From each one I came out wiser. I learned from being criticized and I learned from being told I was wrong. We argued. I learned. When AI programs do that, we will have AI. Until then, not so much.

Argumentation, goals, emotion, visualization, imagination, and reconstructive memory. Stop worrying about current AI programs. Or, start worrying about them. Because they sure aren’t doing those things.







Monday, July 11, 2016

Six things computers (and people) must do in order to be considered intelligent


We hear a lot about AI these days, most of it pretty silly. It all seems to be about answering questions by keyword matching and finding ads based on search. To me, AI has always been a field with intelligence at its center. Here, I will list 6 things that most intelligent people can do that no AI program can do. While Hawking and Gates are very afraid of AI, I am very afraid that no one is working on the right problems in AI anymore.



1. People can make predictions about the outcome of actions


So, I could ask a person: What do you think will happen if we keep having elections for President when a large chunk of the population doesn’t like either candidate?

This would start a conversation about the current election. It might lead to an argument. It might lead to a solution. Type this into Google or Siri or Watson and see what you get. Hint: you get newspaper articles that match on some of the words.

Conversation is a hallmark of intelligence. Any AI system must be able to have a conversation about a complex topic. All the “deep learning” that is going on is not focusing on that very simple test of intelligence.


2. People build a conscious model of the processes in which they engage

Here is something someone might say: I keep hearing about global warming. Should I be fearful? Isn’t this just a threat for those who live in coastal areas? The climate has always been changing.

This question calls for someone who has a model of global warming both now and historically to respond to it. What “AI” could do that today? Who is even trying? (Hint: it is very hard.)


3. People find out for themselves what works and what doesn’t by experimenting from time to time.


Something someone might say: I wonder how likely it is that I would get a speeding ticket if I went 120 on I-95.


A reasonable response to this might be: Where on I-95? or Why would you want to take that risk? 

Find me the AI group that is working on helping people figure out how things will turn out if they try something new.

4. We are constantly evaluating things. We attempt to improve our ability to determine the value of something on many different dimensions

One might say: I think she is in love with me. How do I know for sure?

Typically, people respond to such a sentence with stories of their own lives, of love that went right or went wrong. Find me a computer that tells you a story when you are worried about something in your own life. (I did work on this problem and still do. But you can be sure Microsoft’s AI group isn't working on it.)


5. People try to analyze and diagnose problems they have to determine their cause and possible remedies.


For example: My business has had flat earnings for two years now. Should I be worried? What can I do about it?

A normal person would try to find an expert to ask these questions of. I would like to have a computer be the expert I could ask these questions of. Google responds with a four-year-old article from Atlantic magazine about buying a house:


I assure you that any natural language processing program that Google is working on would not fare much better here.

6. People can plan. They can do needs analysis, as well as acquire a conscious and subconscious understanding of what goals are satisfied by what plans

Example: I am thinking of moving. But I am wondering what will happen to my relationships with the people who live near me.

When computers have stories to tell, can relate an experience or concern that a person has to something they know about, and can start a reasonable conversation about it, then we will have AI. I would not be afraid of that AI. I would welcome it. But, unfortunately, no one is working on this. Companies are saying “AI” constantly and building up expectations in people that will not be satisfied unless and until the so-called AI companies work on these six problems (and many more).

These six problems underlie intelligence, artificial or otherwise. It is time to think about intelligence, and not about Markov Models that make search better.


To summarize: Intelligent people have memories. They augment those memories through daily experiences and human interactions. They don’t have knowledge stuffed into their memories; instead, they learn by attempting to achieve goals they inherently have and finding that the plans they tried need to be adjusted. They get help in the form of stories from other humans, told just in time. When computers can do all this, we will have AI. Right now, we have a lot of marketing and hype.

Monday, June 27, 2016

Attempting to understand Bob Dylan (just like all those big firms' Natural Language Processing programs claim they can do)







Suddenly, natural language processing (NLP) is back in the news. (Oddly, this is a term I made up around 1970 because I didn’t like the previous term: computational linguistics.) I should be very happy that a field in which I spent a lot of time is having a resurgence, but I am not. People say they are working on NLP, but they seem to universally misunderstand the problem. To explain the problem, I will discuss the meaning of some Bob Dylan lyrics. (I chose these because IBM chose Bob Dylan to be in its Watson commercials and Watson summarized his work as “love fades.”)

I have selected a verse from a few of what I consider to be his most popular songs:


Blowin' In The Wind (1963)

Yes, and how many times must a man look up
Before he can see the sky?
Yes, and how many ears must one man have
Before he can hear people cry?
Yes, and how many deaths will it take 'til he knows
That too many people have died?


What do those lyrics mean? To me, this is a song about people’s insensitivity to the plight of others. It was written when the Viet Nam War was just beginning, and Civil Rights protestors were getting killed.

What would modern day natural language programs be able to get out of this verse? That he says “yes” a lot? That some people need more ears?

Let’s look at another verse from another song:


A Hard Rain's A-Gonna Fall (1963)

Oh, who did you meet, my blue-eyed son?
Who did you meet, my darling young one?
I met a young child beside a dead pony
I met a white man who walked a black dog
I met a young woman whose body was burning
I met a young girl, she gave me a rainbow
I met one man who was wounded in love
I met another man who was wounded in hatred
And it's a hard, it's a hard, it's a hard, it's a hard
And it's a hard rain's a-gonna fall.

What is this about? To me it seems to be about the hard knocks of life, and it predicts that things will be getting even worse. Current NLP programs would see this as being about people, I assume, and maybe rain. Would any modern NLP program be able to understand the metaphor of hard rain, or the gift of a rainbow? I doubt it. Yet understanding metaphor is critical to NLP, since metaphor is everywhere. (This food tastes like crap.)
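To make “seeing this as being about people and rain” concrete, here is a bag-of-words sketch that counts the content words in the “I met…” lines of the verse above and reports the most frequent ones. The stopword list is my own rough guess at what such a system would throw away:

```python
from collections import Counter

# Count content words in the "Hard Rain" lines quoted above. This is all a
# bag-of-words view ever sees: word frequencies, not metaphors.
VERSE = """I met a young child beside a dead pony
I met a white man who walked a black dog
I met a young woman whose body was burning
I met a young girl, she gave me a rainbow
I met one man who was wounded in love
I met another man who was wounded in hatred
And it's a hard, it's a hard, it's a hard, it's a hard
And it's a hard rain's a-gonna fall."""

STOPWORDS = {"i", "a", "the", "who", "was", "in", "and", "it's",
             "me", "she", "whose", "one", "another", "beside"}

words = [w.strip(",.").lower() for w in VERSE.split()]
counts = Counter(w for w in words if w not in STOPWORDS)
print(counts.most_common(5))
# Top words come out as 'met', 'hard', 'young', 'man' -- counts, not meaning.
```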

Stanford offers an NLP course (via Coursera). This is what they say about it:

This course covers a broad range of topics in natural language processing, including word and sentence tokenization, text classification and sentiment analysis, spelling correction, information extraction, parsing, meaning extraction, and question answering. We will also introduce the underlying theory from probability, statistics, and machine learning that are crucial for the field, and cover fundamental algorithms like n-gram language modeling, naive bayes and maxent classifiers, sequence models like Hidden Markov Models, probabilistic dependency and constituent parsing, and vector-space models of meaning.

So, using a lot of math, you can figure out that a gift of a rainbow is about helping someone appreciate the beauty around them? I guess a Hidden Markov Model would do that for you.
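What those sequence models actually compute is more like the following: a tiny bigram language model, shown here over a lightly cleaned-up pair of lines from “Blowin’ In The Wind.” This is my own toy sketch, not anyone’s production system:

```python
from collections import Counter, defaultdict

# Learn which word tends to follow which. That is the whole model.
TEXT = ("how many times must a man look up before he can see the sky "
        "how many ears must one man have before he can hear people cry")

tokens = TEXT.split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(word):
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("man"))  # {'look': 0.5, 'have': 0.5}
print(next_word_probs("he"))   # {'can': 1.0}
```

The model can tell you that “can” follows “he”; it cannot tell you that the song is about indifference to other people’s suffering.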

Here are more lyrics from another song:

The Times They Are A-Changin’ (1964)

Come writers and critics
Who prophesize with your pen
And keep your eyes wide
The chance won't come again
And don't speak too soon
For the wheel's still in spin
And there's no tellin' who
That it's namin'
For the loser now
Will be later to win
For the times they are a-changin’.

Was Dylan speaking out against the Viet Nam War here? It seems to me he was asking the media to stop reporting on the war as a wonderful glory for the U.S. and start speaking up about its horrors. How did I figure that out? I read it, thought about it, and recalled its context. Nothing miraculous. (But imagine any of these NLP programs doing that!) To understand, you need to be thinking about what something means. Would your typical modern-day NLP program think this was about prophecy, or losing?


Maggie's Farm (1965)

I ain't gonna work for Maggie's pa no more
No, I ain't gonna work for Maggie's pa no more
Well, he puts his cigar
Out in your face just for kicks
His bedroom window
It is made out of bricks
The National Guard stands around his door
Ah, I ain't gonna work for Maggie's pa no more.

This is a hard one to understand, even for a person. I saw it as a song about dropping out of the system. Here is what Wikipedia says about it:

The song, essentially a protest song against protest folk, represents Dylan's transition from a folk singer who sought authenticity in traditional song-forms and activist politics to an innovative stylist whose self-exploration made him a cultural muse for a generation.

On the other hand, this biographical context provides only one of many lenses through which to interpret the text. While some may see "Maggie's Farm" as a repudiation of the protest-song tradition associated with folk music, it can also (ironically) be seen as itself a deeply political protest song. We are told, for example, that the "National Guard" stands around the farm door, and that Maggie's mother talks of "Man and God and Law." The "farm" that Dylan sings of can in this case easily represent racism, state oppression and capitalist exploitation.

How would Microsoft’s NLP group get their programs to understand this? Here is what they say about themselves:

The Redmond-based Natural Language Processing group is focused on developing efficient algorithms to process texts and to make their information accessible to computer applications. Since text can contain information at many different granularities, from simple word or token-based representations, to rich hierarchical syntactic representations, to high-level logical representations across document collections, the group seeks to work at the right level of analysis for the application concerned.

In other words, since this isn’t a document, it is unlikely that Microsoft could do anything with “Maggie’s Farm” at all. Or, maybe my own ability to process language is off and they would get that the “farm” referred to the state’s exploitation of its own people.

Let’s try another:

Rainy Day Women #12 & 35 (1966)

Well, they'll stone ya when you're trying to be so good
They'll stone ya just a-like they said they would
They'll stone ya when you're tryin' to go home
Then they'll stone ya when you're there all alone
But I would not feel so all alone
Everybody must get stoned.


I have always liked this song because it says two different things at the same time. To me, it says that if you try to do anything at all, someone will always be trying to stop you. It also says drugs are a good solution for dealing with all this.

Maybe Google knows how to deal with this kind of thing. Here is what Google says about their NLP work:

Natural Language Processing (NLP) research at Google focuses on algorithms that apply at scale, across languages, and across domains. Our systems are used in numerous ways across Google, impacting user experience in search, mobile, apps, ads, translate and more.
Our work spans the range of traditional NLP tasks, with general-purpose syntax and semantic algorithms underpinning more specialized systems. We are particularly interested in algorithms that scale well and can be run efficiently in a highly distributed environment.

Our syntactic systems predict part-of-speech tags for each word in a given sentence, as well as morphological features such as gender and number. They also label relationships between words, such as subject, object, modification, and others. We focus on efficient algorithms that leverage large amounts of unlabeled data, and recently have incorporated neural net technology.

On the semantic side, we identify entities in free text, label them with types (such as person, location, or organization), cluster mentions of those entities within and across documents (coreference resolution), and resolve the entities to the Knowledge Graph.

Recent work has focused on incorporating multiple sources of knowledge and information to aid with analysis of text, as well as applying frame semantics at the noun phrase, sentence, and document level.

So, they would probably get the second stoned reference, but the idea that people will try to prevent anything you might do for no good reason would be lost on Google.

Finally, one more song to contemplate:

The Boxer (1970)

  I'm just a poor boy
Though my story's seldom told
I have squandered my resistance
For a pocketful of mumbles
Such are promises, all lies and jest
Still a man hears what he wants to hear
And disregards the rest.


I have always liked this song a great deal. But, I cannot tell you what it is about from looking at these lyrics. Here is the rest of it:

When I left my home and family
I was no more than a boy
In the company of strangers
In the quiet of the railway station
Running scared, laying low
Seeking out the poorer quarters
Where the ragged people go
Looking for the places only they would know.

Asking only workman's wages
I come looking for a job
But I get no offers
Just a come-on from the whores on Seventh Avenue
I do declare
There were times when I was so lonesome
I took some comfort there.

Then I'm laying out my winter clothes
And wishing I was gone, going home
Where the New York City winters aren't bleeding me
Leading me
Going home.

In the clearing stands a boxer
And a fighter by his trade
And he carries the reminders
Of every glove that laid him down
And cut him till he cried out
In his anger and his shame
"I am leaving, I am leaving"
But the fighter still remains.

Seeing the entire song makes it seem to me like a song about hope. But when you Google it, you find out that Dylan was very interested in boxing and that Paul Simon wrote this song as a “dig against Dylan.”

Well, who knows? I don’t really care what these songs mean. But, oddly, I can’t listen to them without taking meaning from them. A song resonates because you get something out of it that stays with you. It may not teach you anything. You may not learn anything from it. But you understand it as best you can nevertheless. To understand means to figure out what words mean in a context and what ideas they are trying to convey. Notice that “ideas” are never mentioned in the write-ups I have quoted above. Google is not trying to figure out what ideas are being expressed, but they do expect humans and computers to “merge” sometime soon (which would mean people had suddenly become a lot dumber).

The hype about NLP these days is about Siri or other imitators that haven’t a clue what you just said but can respond with some words that may or may not be relevant to you.

It would be nice if all these research firms with piles of money to spend would work on the real NLP problem, which is figuring out how humans understand what is said to them and then automatically alter their memories accordingly. When we listen to someone talk, we attempt to discern what ideas they are trying to convey and then we grow in some small way from having participated in the conversation. To put this another way, NLP is really about learning and memory, as I said 35 years ago. Too bad that nowadays we only care about selling better ads to people or answering questions about where they can find a restaurant.

The times they are a-changin’.