Tuesday, August 23, 2016

Robin: a car concierge

Almost four years ago, when we were working on the Smart Living Room project, I noticed a start-up called Robin Labs and asked: How different is a car assistant from a living room assistant?

(From TechCrunch in 2012
http://techcrunch.com/2012/09/19/magnifis-debuts-an-upgraded-robin-the-kitt-like-android-virtual-assistant-app-for-drivers/)


Now, I recently saw a blog post from Robin Labs that says something very sensible, something the husband has been saying for a while.

Their blog post (excerpt below) describes four types of `bots':

  1. App-bots - that sounds like an apt name for those micro-apps dressed up as messenger contacts, typically addressing long-tail use cases such as ordering pizza or checking flight schedules - needs that could as well be met with a native app (assuming you managed to get people to actually download one). More importantly, these use cases are not necessarily conversational by nature. [..] they are often better off with standard visual UI element such as menus or buttons. Unless, of course, they rely on voice for input - then, see (4). Bottom line, app-bots are more apps than bots, in the traditional sense of the word. 
  2. Content bots - such as Forbes or CNN bot, for instance. These guys are really content distribution channels, they are all about push and are hardly ever conversational, but can sometimes support basic keyword search. In theory, a dialogue-driven newsbot could make an interesting product, but nobody has really nailed it yet. 
  3. Chatbots - i.e., genuine "chat bots", where the chat medium is in fact key to the experience, namely, where verbal communication actually helps get the job done. One popular use case is of course, customer service, which may very well be the killer app for chatbots. But, beyond run-of-the-mill customer support, we are seeing a surge in conversational concierge bots: from transaction-oriented services such as travel agents, to more casual assistance such as movie recommendations, to virtual friends, etc. Notice that, in principle, chatbots can be powered by either human agents or machines (or both).  Naturally, the trend is to eliminate or at least minimize the reliance on humans - to make the service both more responsive and more scalable. But, even when striving for a fully automated chatbot, one should not completely rule out a hybrid human-in-the-loop approach.
  4. Voice assistants - such as Amazon Echo, our Robin app, etc. - are essentially chatbots that use voice as the main/only communication channel, becoming very handy e.g., in the living room, in the car and other hands-free scenarios. Due to their reliance on voice, these bots have the highest conversational fluency bar of all other categories. As a result, they are the hardest to build, but can be genuinely useful when typing is not a good option - as evidenced by Amazon Echo's popularity. When the experience works, it does feel like the holy grail! 
Well, I wouldn't write it exactly like that, but I totally agree that open-ended conversation is very different from a bot that is supposed to help you solve a particular problem...

Anyways, they also have an awesome picture of Daleks, reproduced here for your delight.



Thursday, August 18, 2016

Lewis and the mysteries of A*

Some three weeks ago we had the pleasure of a visit by Mike Lewis, from the University of Washington, originally a student of Mark Steedman in Edinburgh.

He came to Nuance and talked about his super-efficient A* parsing system, the one he presented at ACL in San Diego. I really wanted him to talk about his older work with Mark, Combined Distributional and Logical Semantics (Transactions of the Association for Computational Linguistics, 2013), but if someone is nice enough to come and talk to you, they may choose whatever they want to talk about, at least in my books.

And besides, people in the Lab were super interested in Mike's new work. Mike is a great speaker, one of those who give you the impression that you are really understanding everything he says. Very impressive indeed! Especially if you consider how little I know about parsing or LSTM (long short-term memory) methods. But the parser is publicly released; everyone can find it on GitHub.

There's even a recorded talk of the presentation I wanted to hear, Combined Distributional and Logical Semantics, so altogether it was a splendid visit. When discussing other work in their paper, Mike and Mark say about our Bridge system:

'Others attempted to build computational models of linguistic theories based on formal compositional semantics, such as the CCG-based Boxer (Bos, 2008) and the LFG- based XLE (Bobrow et al., 2007). Such approaches convert parser output into formal semantic representations, and have demonstrated some ability to model complex phenomena such as negation. For lexical semantics, they typically compile lexical resources such as VerbNet and WordNet into inference rules—but still achieve only low recall on open-domain tasks, such as RTE, mostly due to the low coverage of such resources.' 

I guess I agree that the resources we managed to gather didn't have the coverage we needed. Still, other resources like those are needed. We need bigger, more complete, more encompassing "Unified Lexica" for different phenomena, and for more, many more languages. But I stop now with a very impressive slide from Mike's presentation.



Wednesday, August 17, 2016

Feferman's Farewell

 I was super sad to hear that we lost Professor Sol Feferman on July 26th, 2016. This week WOLLIC is happening in Puebla and Ruy asked me if I wanted to say a few words about Sol in a special session due to happen today in his honour.

I knew I would be busy at the time of the session, as seminars at Nuance Sunnyvale are on Wednesdays at 11 am, so I said I couldn't do it. Ruy then suggested recording a tribute, so I decided to try it.

I looked through many emails to, from, and about Sol, and I looked at papers and reports, and I managed to write a short text. Not as short as I wanted it to be: when I recorded it, it came to 12 minutes, instead of the 5 to 10 minutes I had aimed for. I even managed to get to grips with quickmovie (ok, the only thing you need to discover is where the button to record something is...) and I recorded my message. Only to send it and discover that the programme had been changed at the last minute and the session in Sol's honour had already happened. Oh well.

Here's my tribute to Sol and Anita Feferman. Grisha Mints and Bill Craig also show up a little. We're definitely getting poorer!

Semantics: Distributional and Compositional. DUDES and PROPS

(I haven't posted anything in a long while; the stuff is accumulating in a hazardous way. Today we had Gabi Stanovsky visiting and his talk was great, and it reminded me to post this.)

There is by now a great deal of literature on the deep problem of unifying distributional semantics (in terms of vectors and cosine distances) and logical or compositional semantics (in terms of negation, conjunction, disjunction, implication, etc.). Because it is an interesting and very topical problem (several of the people involved have sold multi-million-dollar companies, for example), several groups have tried to crack it, with different theories.
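
To make the tension concrete, here is a toy numerical sketch (mine, not from any of the papers below; the three-dimensional "word vectors" are made up): cosine similarity captures that `good' and `great' are distributionally close, but a naive additive composition of `not good' stays close to `good', so negation is largely invisible to the geometry alone.

import numpy as np

def cosine(u, v):
    # cosine similarity: u.v / (|u| |v|)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# made-up 3-dimensional "word vectors", purely for illustration
good = np.array([0.9, 0.1, 0.2])
great = np.array([0.8, 0.2, 0.1])
not_ = np.array([0.1, 0.9, 0.1])

print(cosine(good, great))        # high: the two words are distributionally close
print(cosine(good, good + not_))  # still fairly high: adding "not" barely moves
                                  # the vector, unlike negation in logic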

The vision paper, explaining why we need "distributional semantics" as well as "logical semantics", is Combining Symbolic and Distributional Models of Meaning, by Clark and Pulman. Only 4 pages and well worth reading!

Then I made a list of a few other papers that caught my attention and that might indicate a way forward for what I want to do. My list:
1. Combined Distributional and Logical Semantics, Lewis and Steedman, 2013.
2. Transforming Dependency Structures to Logical Forms for Semantic Parsing, Reddy et al., 2016.
3. Flexible Semantic Composition with DUDES, Cimiano, 2009.
4. Getting More Out Of Syntax with PropS, Stanovsky et al., arXiv, 4 March 2016.

These last two papers form a side trip from the main concern of merging distributional and logical semantics, but are still about meaning. The DUDES paper is fairly short and old (2009), and the author seems more concerned with lexical resources nowadays. The PROPS paper is longer and seems much more useful for my goals. (Also, isn't PropS a great name?)

The basic ideas of the paper seem to be:

1. NLP applications often rely on dependency trees to recognize major elements of the proposition structure of sentences.
2. Many phenomena are not easily read out of dependency trees, often leading to ad-hoc heuristic post-processing or information loss.
3. They suggest PROPS – an output representation designed to explicitly and uniformly express much of the proposition structure which is implied by the syntax.
4. They also provide an associated tool for extracting it from dependency trees (yay!!).

(Project page at PropS -- Syntax Based Proposition Extraction, with an online demo. Code on GitHub at gabrielStanovsky/props; it requires Python and Java 7.)

Their desiderata:
a. uniformly represent propositions headed by different types of predicates, verbal or not;
b. canonicalize different syntactic constructions that correspond to the same proposition structure;
c. decouple independent propositions while clearly marking proposition boundaries;
d. "mask" non-core syntactic detail, yielding cleaner, compact structures;
e. enable simple access to the represented propositions by a uniform graph traversal.

Their design principles: 
a. Mask non-core syntactic detail:
    - remove auxiliary words and instead encode their syntactic function as features;
    - group atomic units (such as noun compounds) within a single node.
b. Represent propositions in a uniform manner (verbal and adjectival).
c. Canonicalize and differentiate syntactic constructions:
    - unify the representation of propositions which are semantically equivalent;
    - differentiate syntactically similar, yet semantically different, constructions.
d. Mark proposition boundaries.
e. Propagate Relations: every relation which is inferable through parse tree traversal (for instance, through conjunctions) should be explicitly marked in the representation. 

Their output format:
1. Similar to dependencies, BUT:
2. Typed nodes: (1) predicates, which evoke a proposition, and (2) non-predicates, which can be either arguments or modifiers.
3. Simplified graph structure, allowing multi-word nodes (e.g., Barack Obama), versus having each node correspond to a single word as in dependency trees.
4. The resulting structures are no longer limited to trees, but are DAGs.
5. A label set of 14 relations (compared with approximately 50 in Stanford dependencies). (A toy sketch of such a structure follows below.)
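
To make the format concrete, here is a toy sketch in Python (my own, not the actual PropS code, and with a made-up subset of relation labels rather than the real 14) of what such a graph might look like: typed, possibly multi-word nodes, a small relation label set, and edges that may form a DAG rather than a tree.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

RELATIONS = {"subj", "obj", "mod", "comp"}   # illustrative subset, not the real label set

@dataclass
class Node:
    words: List[str]                  # multi-word nodes, e.g. ["Barack", "Obama"]
    is_predicate: bool = False        # predicates evoke propositions; the rest are arguments or modifiers
    features: Dict[str, str] = field(default_factory=dict)   # e.g. {"tense": "past"}, masking auxiliaries

@dataclass
class PropGraph:
    nodes: List[Node] = field(default_factory=list)
    edges: List[Tuple[int, str, int]] = field(default_factory=list)  # (head index, relation, dependent index)

    def add_edge(self, head: int, rel: str, dep: int) -> None:
        assert rel in RELATIONS, f"unknown relation: {rel}"
        self.edges.append((head, rel, dep))   # a node may get several heads, so the result can be a DAG

# "Barack Obama visited Paris" as a single proposition: visited(Barack Obama, Paris)
g = PropGraph()
g.nodes += [Node(["visited"], is_predicate=True, features={"tense": "past"}),
            Node(["Barack", "Obama"]),
            Node(["Paris"])]
g.add_edge(0, "subj", 1)
g.add_edge(0, "obj", 2)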

I need to check how Bridge/XLE deals with the pair: `The director who edited Rear Window released Psycho' and
`Hitchcock, who edited Rear Window, released Psycho'. I also need to check how we handle and mark what they call raising verbs.
They say [...] ``we heuristically use a set of approximately 30 verbs which were found by (Chrupała and van Genabith, 2007) to frequently occur in raising constructions. For these verbs we do not produce a proposition.'' Seems sensible to me, and I don't think we did this in Bridge.
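
For my own notes, a minimal sketch of how that heuristic might look in code; the verb list below is just a handful of illustrative examples, not the actual ~30-verb list from Chrupała and van Genabith (2007), and the function name is mine.

# Illustrative raising-verb filter: for raising verbs we skip creating a
# proposition of their own and let the embedded verb carry it.
RAISING_VERBS = {"seem", "appear", "tend", "happen"}   # illustrative subset only

def evokes_proposition(verb_lemma: str) -> bool:
    return verb_lemma.lower() not in RAISING_VERBS

for verb in ["seem", "edit", "release"]:
    print(verb, evokes_proposition(verb))   # seem -> False, edit -> True, release -> True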


Evaluation:
The evaluation uses the MCTest corpus for machine comprehension (Richardson et al., 2013), composed of 500 short stories, each followed by 4 multiple-choice questions. The MCTest comprehension task does not require extensive world knowledge. They focus on questions which are marked in the corpus as answerable from a single sentence in the story (905 questions followed by 3620 candidate answers). Richardson et al. (2013) introduce a lexical matching algorithm, which they adapt to use either dependency or PROPS structures, both obtained using the Berkeley parser. (The numbers show the expected progression, but are still low.)
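
As a reminder to myself of what that kind of baseline does, a rough sketch (my own simplification, not the exact Richardson et al. sliding-window algorithm): score each candidate answer by the word overlap between (question + answer) and the best-matching story sentence.

def tokens(text: str) -> set:
    # lowercase, strip punctuation, split on whitespace
    return set("".join(c if c.isalnum() else " " for c in text.lower()).split())

def best_answer(story_sentences, question, candidates):
    def score(answer):
        target = tokens(question) | tokens(answer)
        return max(len(target & tokens(sentence)) for sentence in story_sentences)
    return max(candidates, key=score)

story = ["Alice went to the park with her dog.", "The dog chased a red ball."]
print(best_answer(story, "What did the dog chase?",
                  ["a red ball", "a cat", "Alice", "the park"]))   # -> "a red ball"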

Sunday, July 3, 2016

Proof-Theoretic Semantics for Non-Philosophers

`Proof-theoretic semantics is an alternative to truth-condition semantics,' writes Peter Schroeder-Heister in the Stanford Encyclopedia of Philosophy, and he ought to know, since he created the label for the idea in 1991. He says that proof-theoretic semantics is part of the tradition according to which `the meaning of a term should be explained by reference to the way it is used in our language'. Meaning-as-use is the slogan for that, which somehow sends people to Wittgenstein and his `language games'.

This post is merely a summary of the first part of Schroeder-Heister's Stanford Encyclopedia entry on proof-theoretic semantics. Why bother then? I almost hear you ask. Well, I couldn't find a short account that I liked. This is not it, definitely, but it should work as a marker to continue looking.
Also, I have found many interesting articles in Prawitz's, Schroeder-Heister's and Girard's webpages that might be it. But there is no time to read them now, with only a week before the course at NASSLLI.

Mathematicians might want to think of it as a generalization of proof theory starting from Gentzen's work (Investigations into Logical Deduction). Gentzen famously remarked that the introduction rules in his calculus of natural deduction could be said to define the meanings of logical constants, while the elimination rules can be obtained as a consequence of their definition.
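
For instance, the rules for conjunction, written out in the usual natural deduction notation (standard textbook material, included here just as a reminder): the introduction rule fixes what counts as a proof of a conjunction, and the eliminations only take out what the introduction put in.

\[
\frac{A \qquad B}{A \wedge B}\;\wedge\mathrm{I}
\qquad\qquad
\frac{A \wedge B}{A}\;\wedge\mathrm{E}_1
\qquad\qquad
\frac{A \wedge B}{B}\;\wedge\mathrm{E}_2
\]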

While Hilbert is credited with the revolutionary idea that proofs should be studied as mathematical objects themselves, Prawitz (1972) made the original distinction between `reductive' proof theory and `general' proof theory. Hilbert-style "reductive proof theory" is the "attempt to analyze the proofs of mathematical theories with the intention of reducing them to some more elementary part of mathematics such as finitistic or constructive mathematics", while in general proof theory "proofs are studied in their own right in the hope of understanding their nature".

Meanwhile, Kreisel proposes to explain proof theory "from a neglected point of view. Proofs and their representations by formal derivations are treated as principal objects of study, not as mere tools for analyzing the consequence relation." (Kreisel, 1971). Kreisel focuses on the dichotomy between a theory of proofs and a theory of provability, while Prawitz concentrates on the different goals proof theory may pursue. However, both stress the necessity of studying proofs as fundamental entities by means of which we acquire mathematical knowledge.
Thus in general proof theory we are not only interested in whether B follows from A, but in the ways by which we arrive at B starting from A.

Most forms of proof-theoretic semantics are intuitionistic in spirit, which means in particular that principles of classical logic such as the law of excluded middle or the double negation law are rejected or at least considered problematic. The main tool of proof-theoretic semantics, the calculus of natural deduction, is biased towards intuitionistic logic, in the sense that the straightforward formulation of its elimination rules is the intuitionistic one. Classical logic is only available by means of some rule of indirect proof, which, at least to some extent, destroys the symmetry of the reasoning principles. Of particular importance is its functional view of implication, according to which a proof of A → B is a constructive function which, when applied to a proof of A, yields a proof of B. This functional perspective underlies many conceptions of proof-theoretic semantics, in particular those of Lorenzen, Prawitz and Martin-Löf.
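
In symbols, the functional (BHK-style) reading of implication mentioned above, which is also what makes the Curry-Howard reading of natural deduction work:

\[
p : A \rightarrow B
\quad\text{means}\quad
p \text{ is a construction such that } p(a) : B \text{ for every proof } a : A,
\]

so that the elimination rule (modus ponens) is just function application, and the introduction rule is \(\lambda\)-abstraction.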

Natural deduction is based on five major ideas:
  • Discharge of assumptions: Assumptions can be “discharged” or “eliminated” in the course of a derivation, so the central notion of natural deduction is that of a derivation depending on assumptions.
  • Separation/Modularity: Each primitive rule schema contains only a single logical constant.
  • Introduction and elimination: The rules for logical constants come in pairs. The introduction rule(s) allow(s) one to infer a formula with the constant in question as its main operator; the elimination rule(s) permit(s) one to draw consequences from such a formula.
  • Reduction: For every detour consisting of an introduction rule immediately followed by an elimination rule there is a reduction step removing this detour (a worked example follows after this list).
  • Normalization: By successive applications of reductions, derivations can be transformed into normal forms which contain no detours.
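
A worked example of such a detour and its reduction, for implication (the standard case, written out here as a reminder): an introduction of A → B immediately followed by its elimination is replaced by plugging the given proof of A directly into the derivation of B.

\[
\dfrac{\dfrac{\begin{array}{c}[A]^{1}\\ \vdots\\ B\end{array}}{A \rightarrow B}\;\rightarrow\!\mathrm{I}^{1}
\qquad
\begin{array}{c}\vdots\\ A\end{array}}
{B}\;\rightarrow\!\mathrm{E}
\qquad\rightsquigarrow\qquad
\begin{array}{c}\vdots\\ A\\ \vdots\\ B\end{array}
\]

The reduct is the derivation of B in which the assumption [A] is replaced by the given derivation of A; normalization just iterates such steps until no detours remain.
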
Lorenzen (1955) introduced the idea of the inversion principle, which says that everything that can be obtained from every defining condition of A can be obtained from A itself.
von Kutschera (1968) introduced "Gentzen semantics", a semantics of logically complex implication-like statements A1,…,An → B with respect to calculi K which govern the reasoning with atomic sentences.
----



Sunday, June 19, 2016

Nominalizations and Zombie Nouns

I was at Stanford for some of the DELPH-IN meeting and we had the traditional conversation about nominalizations once again.



Is it useful to connect `destruction' with `destroy'? Should this be a lexical resource? How? Should we classify these (or some of them) as nominalizations or deverbals? Which?

I have been working on this stuff for a while. First with Olya Gurevich (and Dick and Tracy), then in Portuguese, with Livy Real (and Claudia and Alexandre).

It looks like there's still plenty to do.

But Knowledge Representation aside, as Helen Sword says, nominalizations are `zombie nouns'.

Persepolis seems beautiful, even after being destroyed by Alexander in 330 B.C.
(By the way, Alexander the Great also destroyed Thebes and Tyre, the latter by building a causeway!!!)

Saturday, March 12, 2016

Late with International Women's Day 2016

Someone I don't know personally yet (Evangelia Antonakos) posted the Wikipedia page `List of women in mathematics' in the Facebook group Women in Logic, and guess what? I am there!!!

As Groucho Marx would've said, things are really wrong if I'm invited to this party. Still, a party is a party; might as well enjoy it.

Thank you Evangelia, this was a super present!