The online AI course I took, taught at Stanford, recommended Python for the homework. I believe Georgia Tech still uses Lisp.
The fallacy here is that "new" equals "good". AI research is one of the oldest computing research disciplines. It keeps calving off subfields as people realize that its techniques can be used elsewhere. Language processing, machine learning, and data mining are all examples of "practical" applications that use a wide range of languages.
So it's less that the main field has changed than that it has been refined into a massive array of related disciplines. It's much like saying "scientific computing" and expecting it to mean nothing more than solving linear equations.
The languages you've mentioned have evolved quite a lot in the last 20 or 30 years. Lisp spawned Common Lisp and Clojure. Prolog spawned Visual Prolog (it has objects...) and Mercury (take Haskell and Prolog, lock them in a room together... stand well away and get ready to run).
Given that AI research is more theoretical, it makes sense that it would focus on the theory (math) rather than the practicalities (languages).
All that being said, the biggest innovator in AI technologies is, I'd wager, Google. They tend to favor Python (and Go and Dart, but that's beside the point). Thus I'd say Python is the "recent language of choice", but you could also use Haskell or OCaml or F# or C# or even Java.
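To give a sense of why Python gets picked for this kind of work, here's a toy 1-nearest-neighbour classifier in pure standard-library Python. It's purely illustrative (the data points and labels are made up), but it shows the concise, math-adjacent prototyping style that draws AI courses and research toward the language:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(train, point):
    # train is a list of (features, label) pairs;
    # return the label of the closest training example
    return min(train, key=lambda ex: euclidean(ex[0], point))[1]

train = [((0.0, 0.0), "blue"), ((1.0, 1.0), "red")]
print(predict(train, (0.9, 0.8)))  # closest to (1.0, 1.0), so "red"
```

You could write the same thing in Lisp or Haskell just as tersely, of course, which is rather the point: the field cares about the math, and picks whatever language gets out of the way.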