10 Questions with Noah A S


Every group of friends has one person who holds the others together. In the world of Magic Leap, this person is Noah Aubrey Schiffman.

When the HoloLens first came out, the HoloLens team tried to create their own community website and forums. But people felt more comfortable hanging out in the HoloDevelopers Slack group that Jesse McCulloch created (and now Jesse works at Microsoft). When the Magic Leap came out at the end of 2018, a friend and I started a Slack group for it while others created a Discord channel to gather the community.

However, it was the Twitter thread and #leapnation tag that Noah created which eventually became the gathering spot for MR developers, hobbyists and fans.

Why, you might ask? I think communities develop around people whose sincere enthusiasm reflects and reveals the common purpose inside the rest of us. In the world of Magic Leap, this hearth keeper is Noah, unofficial community ambassador to the magicverse, first of his name. Long may he reign.


What movie has left the most lasting impression on you?

Terminator 2: Judgment Day (with the fear of Skynet) or The Matrix (the idea of living in a simulation).

What is the earliest video game you remember playing?

It might have been something on an old-style Mac. Probably the game Sockworks, which is for young toddlers.

Who is the person who has most influenced the way you think?

Probably my mother or a few of my friends.

When was the last time you changed your mind about something?

I do it a lot… so I guess it was this week.

What’s a skill people assume you have but that you are terrible at?

Ah, a skill I don’t have that people assume I have… development, in something. It could be JavaScript; I’ve not made much of anything yet.

What inspires you to learn?

More learning, I guess. Isn’t it a cycle?

What do you need to believe in order to get through the day?

I don’t really need to believe very much; I’m good when it comes to coping? Is this the question?

What’s a view that you hold but can’t defend?

It’s when I know something is coming or on the way but I signed an NDA so I cannot talk about it.

What will the future killer Mixed Reality app do?

Something social! *Or* it will give you news! (Doesn’t Twitter do both?)

What book have you recommended the most?

Snow Crash.

10 Questions With Charles Poole


Charles Poole, the owner of IS Studios, is currently one of the most experienced mixed reality developers in the business. Like many of the other people well known for their development chops on the HoloLens and Magic Leap One, he fell into it accidentally. Through a combination of determination and blind luck, as well as the ability to pick up a new UX paradigm that requires technical acumen with both .NET and Unity, he is currently one of those rare people with 3+ years of hands-on MR design, development and project management experience. You’ll have to ask him yourself for the full story, but it basically comes down – as with so many others – to getting his hands on a very expensive device and learning to make it hum (ideally using spatial audio).

Charles is soft spoken and kind. One of the very interesting things about his background is that he is a mathematician – and so in that small subclass of software developers who actually know math! There’s nothing nicer in the world of programming than having a friend you can hit up when you are having problems with an algorithm or with your matrix math.


What movie has left the most lasting impression on you?

Hackers. I think watching Hackers in ’95/’96 shaped my childhood and later choices when it came to education and what I spent my time on.

What is the earliest video game you remember playing?

Super Mario Brothers on the NES, or Sky Kid, also on NES. I remember playing it for hours just to get to the 3rd or 4th level, then watching my father get a lot further.

Who is the person who has most influenced the way you think?

Maybe Buckminster Fuller, Neal Stephenson, or Michael Crichton. I read a lot as a child, and I feel as though all the views I was exposed to through fiction and non-fiction had a big influence on how I see the world and approach problems. In general the problems seem really big, cause a lot of drama, are entertaining to read and experience, and then the solution just happens to come together from a character who has the experience to pull a solution out of their ass.

When was the last time you changed your mind about something?

A big one recently, and kind of mild, was using Photon for the multiplayer aspects of my work. I was against Photon for a long time; I wanted to be in control of every aspect of what I was building. So I’d do things like make a custom socket server, write the server in DarkRift, or use WebRTC. One of the most important things about freelancing is using every tool you have to accelerate development, while keeping it all together. We had to make a decision recently about a multiplayer backend that could scale to thousands of users but still be self-hosted, and the time-frame was extremely compressed, so I revisited Photon – specifically PUN2, which had been released since the last time I had used PUN – and it felt like it had come a long way in the time since I had used it last.

Simpler and more personal – My daughter’s kindergarten teacher had been pushing for her to repeat kindergarten. I was staunchly against it, she was getting top marks, won the science fair over 5th graders, and with something she had actually done and came up with on her own, we only bought the materials. But she just wasn’t emotionally ready for the pace to get quicker in first grade, and her teacher made her excited about helping out for another year. So, we agreed to have her repeat kindergarten, because she loves to learn, and we didn’t want to make school into something she hated.

What’s a skill people assume you have but that you are terrible at?

Managing my time, I’m terrible at managing my time, I tend to get sucked into a project and neglect everything else. I would work every day from 9am – 9pm or later. I had to step back and put a rigid stop time on my day so I would spend time with my kids and not just work through their whole childhood.

What inspires you to learn?

I want to do everything myself, and push myself outside my developer comfort zone everyday. I’ll say ‘yes’ to things just for the challenge of figuring it out.

What do you need to believe in order to get through the day?

That things can only get better. I started off this dev journey making a thousand bucks a month, living in a tiny apartment with my wife and two kids. Every day, week, month feels like things have gotten better for us, at some point I want to turn around and help make other people’s lives better too.

What’s a view that you hold but can’t defend?

That anything is possible with enough hard work. I have an applied math background, and have seen sparks of insight and intuition I know I’d never have, but I still feel like I’d get there eventually if I put enough hours into it.

What will the future killer Mixed Reality app do?

Something agent based, an intelligent agent that acts as your exocortex. AI/ML is the future of Human Computer Interaction, the killer app won’t feel like an app, it will just be part of your life.

What book have you recommended the most?

Rainbows End by Vernor Vinge – it’s shaping up to be the most prescient book I’ve read. It was written in 2006, but the trends he wrote about are what we’re starting to see today: the nascent AR technology.

12 Questions With Simon “Darkside” Jackson


Simon is one of the main contributors to the Microsoft MRTK framework for HoloLens and also to the XRTK framework for cross-platform mixed reality development. He is the author of several technical books on Unity. He is the keeper of the flame for the Unity UI Extensions source code.

Simon basically really intimidates me. He knows the Microsoft coding stack as well as the Unity stack, which makes him formidable. He’s currently working on extending the XRTK framework to support the Oculus Quest, which means if you have built your HoloLens or Magic Leap app on the XRTK, your app will automagically also run on the Quest thanks to Simon. That’s some seriously cool stuff.

He also happens to be a very nice person who is genuinely concerned about the well-being of the people around him – which I found out the easy way over many online and in-person interactions. I’m not totally sure why he promotes himself as being of the Darkside since he is clearly more of a Gray Jedi – but that’s not one of the 10 questions, so we may never know. Without further ado, here are Simon’s answers to the 10 Questions:


What movie has left the most lasting impression on you?

“The Matrix, it shows us how to stand tall, to face adversity with strength and uncover meaning in this world we call life.”

What is the earliest video game you remember playing?

“Given I have to recognise I’m getting old, the earliest game I recall was Pong on the Atari 2600. It was the first game console our family owned. My first games would be the penny shuffle machines in the arcades of old.”

Who is the person who has most influenced the way you think?

“William Shatner, for showing us how to boldly go and give us a glimpse of the world I’d like to see us aspire to.”

When was the last time you changed your mind about something?

“Whenever the wife decides something and I have no other option but to agree.”

What’s a skill people assume you have but that you are terrible at?

“Recruiters are constantly sending me offers for jobs developing in JavaScript or Java, which I’ve avoided for most of my developer life.”

What inspires you to learn?

“My life’s goal is to always learn something new each and every day, to grow and develop.  If we no longer aspire to develop ourselves we cease to be.”

What do you need to believe in order to get through the day?

“I have to believe the coffee will not run out, else the world becomes a much more vicious place.  I also hope to defeat ignorance, but ignorance always finds new ways to baffle me.”

What’s a view that you hold but can’t defend?

“I have long held the belief that humankind will eventually realise its insignificance and start to work towards the betterment of ourselves and the planet we live on.  However, I’m proven wrong each and every day (for now).  Basically, I want the world of Star Trek, not the world of Star Wars.”

What will the future killer Mixed Reality app do?

“Once mixed reality technology finally becomes affordable enough and cool enough to wear all day long, I believe the killer experience will be something that integrates with our everyday. An app/experience that will enrich the world around us, show us new sights and experiences, and offer us new ways to interact. Be it a simple experience that adds wonder to a shopping centre visit, or one that uses geolocation whilst visiting historic sites to completely immerse us whilst learning (instead of just reading signs as we do now).”

What book have you recommended the most?

“Snow Crash by Neal Stephenson, it opens up so many new possibilities and gravitates towards the dangers of being “plugged in” too much. Giving us a sense of wonder and danger in equal measure, leading us to live in a world augmented by technology but not driven by it.”

And then Simon volunteered two more unsolicited questions:

Favourite quote?

“The definition of insanity is doing the same thing over and over again and expecting a different result.
—  Albert Einstein (as well as others).”

Most used phrase?

“Because… unity.”

10 Questions with Suzanne Borders


Suzanne is the CEO of BadVR – which IMO wins the prize for best company name and probably could easily make a top 10 list for band names, also. Suzanne’s company works with the fascinating world of data visualization in VR and MR. She is also the recipient of one of the coveted 2019 Magic Leap grants and is a member of Magic Leap’s Independent Creator Program. I met her briefly at an MR event in Mountain View, CA in early 2019. Besides being an amazing advocate for the importance of true 3D data visualization in spatial experiences, she has successfully shown everyone how to be a leader and promoter of mixed reality in the XR world.

What movie has left the most lasting impression on you?

This one is tough! I’m a huge film buff and there have been so many movies that have deeply impacted me and altered my understanding of the universe.

That being said, I think the most impactful film I’ve ever watched is “The Holy Mountain” by Alejandro Jodorowsky. It’s such an explosion of creativity, a surrealistic fever dream that functions on so many levels as a commentary on the human desire to seek truth and enlightenment. Jodorowsky is unlike any other filmmaker out there, a true magician who makes film into high art without losing the ability to make impactful statements about the universal human condition. Any of his films could really be considered my favorite, but “The Holy Mountain” in particular speaks to me the most because it best captures the hero’s journey; our collective desire to seek something greater from life than what we’re given. A lot of surrealistic film is just weird for the sake of being weird and therefore loses impact because it doesn’t use the symbolism of surrealism to make any sort of deeper statement. Jodorowsky is a surrealist in the best sense of the term – all his bizarre, unexpected images convey meaning and activate archetypical feelings, drives, and desires in his audience. He’s a master of the subconscious and knows how to access and wield communicative power in this area. Because of this, he’s my creative hero and I look to his work often for inspiration, especially when attempting to craft products that have the ability to touch users’ subconscious. I think this is key when unlocking broad market appeal for products or film or art in general. To really touch and impact a wide audience, the artist or creator must touch on, and involve, a universal archetype. Jodorowsky’s films taught me this lesson and showed me how to execute on it. I want to give a big shout-out and thank you to my filmmaker friend Ryan, who introduced me to them. He, in many ways, has fundamentally changed how I approach any creative challenge by showing me Jodo’s work.

Beyond “The Holy Mountain,” I’m a big fan of “Belladonna of Sadness” (you will not find a more beautifully animated film ever), “Apocalypse Now” (Brando as Kurtz and his monologue at the end talking about the clarity of evil is a perennial favorite; combining Conrad’s “Heart of Darkness” with the Vietnam war was a stroke of pure genius), “Funeral Parade of Roses” (a Japanese film that sets the ancient story of Oedipus into the transgender alternative subculture in 1960s Japan; I love it for its ability to utilize archetypical images and stories in an unexpected and creative way), “Hiroshima Mon Amour” (any media by Marguerite Duras is an automatic favorite), and “Last Tango in Paris” (I adore Brando, he’s an absolute legend, and this film touches on so many truths of the human existence, our longing for connection, the power of anonymity; my own personal life makes this film more powerful to me than it will be to many, but nonetheless I adore it). And of course, the visual style and occult symbolism of Dario Argento’s films are a forever favorite (“Suspiria” being the pinnacle of Argento’s work IMO).

Lastly, Fellini’s “8 1/2” was the first film I watched as a child that really unlocked for me the power of cinema and storytelling. Prior to watching it, I had dismissed film as some inferior commercial medium. I saw it as cheap mindless entertainment for the masses without substance or meaning. For me at that time, my understanding of film was limited to boring and poorly made summer blockbusters. I remember clearly popping in the 8 1/2 VHS tape at age 17 without any expectation, just another mindless story to pass the long summer hours of adolescence. But the story that jumped out from the screen – starting with Fellini’s famous opening dream sequence – absolutely captivated me. I found myself profoundly touched at the end of the film, crying even, and realized that I had been changed forever for having watched it. The message of the film – our flawed desire for human connection and all the broken and dysfunctional ways we pursue it – resonated with me at such a level that I have, decades later, never forgotten that moment. From that point on, I considered film and storytelling a high art that held the potential to change the world. Of course, not all film or stories rise to this potential and I’ve continued to be disappointed by mainstream commercial film in such a major way that I don’t even engage with it anymore. But 8 1/2 made me realize the potential of film as a medium for spiritual transformation. It showed me the power storytelling had to bring humanity together and demonstrated the medium’s ability to hold up to the audience a mirror of themselves, helping them pursue a deeper understanding of both themselves and the world around them.

Obviously, I adore film. It is one of my biggest sources of creative inspiration for all my technical work. I love immersive tech because one builds experiences, not screens. MR holds the same potential to affect deep spiritual change and transformation in users and that interests me immensely.

What is the earliest video game you remember playing?

LOOM! I remember playing it on the first computer my father bought for our family, when I was 6 or 7 years old. I remember spending hours and hours sitting in front of the computer playing, captivated by the beautiful game art. Lucasfilm Games are the best, but in particular Loom really did it for me. I loved (and still love) that the primary way Bobbin Threadbare (the main character) interacted with his world was through music and sound. Such an original and creative idea!

Plus, you could cast spells to literally rip apart the fabric of existence, calling forth the lord of the dead, ripping open cemeteries to speak to the souls of the deceased. You could exist beyond space and time and your character could visit this beautiful lake floating in the void, populated by swans who spoke to you in parables of truth. As a goth kid and a lover of poetry, this was beyond transformative for me. I wanted to live in Loom! Additionally, the game came with this amazing backstory about a world full of guilds and weavers of destiny. I used to listen to the backstory tape, complete with a dramatic reenactment, and pretend I was Bobbin Threadbare. Loom will forever be my favorite game of all time.

Myst is a very close second!

Who is the person who has most influenced the way you think?

This is a difficult one – there have been so many amazing mentors in my life and each one of them has taught me something important, about myself, about my experience of the world.

As mentioned, Jodorowsky has been a major influence on me and all that I create. I’ve followed him around the world and I’ve actually met him in real life. I was fortunate enough to have him read my tarot in Paris and that reading truly changed my life. I won’t go into details because it was a deeply personal reading, but it transformed me without doubt. I also was lucky enough to meet him again at the Egyptian Theater in Los Angeles and at this event he dropped many nuggets of wisdom as well. 

I’ve also learned a lot from the coterie of filmmaker friends that I’ve developed here in Los Angeles. The one in particular who introduced me to Jodorowsky has taught me a lot about the creative journey. He’s taught me how to dive into my creative subconscious to identify those valuable, universal, broadly resonant, true ideas. I’ve always been fascinated with the ability to broadly affect so many different types of people with one single idea, and I wanted to translate that to my products. When you talk to someone who wrote or directed a hugely successful film, you find they have this ability to take a concept and distill it down into its most basic form. However, instead of that process being reductive or simplistic, you find that this distillation strengthens the idea and makes it more crystalline and clear and, most importantly, universally accessible. The ability to take complex, nuanced ideas and make them resonate with the broadest audience possible is one that I value highly. I’m very glad to have had a group of people who’ve helped teach me this skill, regardless of the difference in our industries.

When was the last time you changed your mind about something?

Whew boy, I change my mind all the time, constantly, on a second to second basis! I’m always ingesting data about my world, through experiences, books, travel, websites, music, films, poems, and products. Even subconsciously, my mind is always picking up on new data about my world, which then changes my understanding of the universe. Plus, I believe everything constantly changes, so I have to keep pace with this change and adjust my thoughts and theories to mesh with the latest information.

A system that runs off absolutes and stasis is brittle and bound for failure. Only by being nimble and changeable can any system truly be strong and resilient. As such, I agree very much with Nassim Nicholas Taleb and his concept of anti-fragility. Anti-fragility involves growth through stress and I’d like to think all of my internal world models fall into this category by being responsive in real time to new data that stresses their limits, structures, and boundaries. 

What’s a skill people assume you have but that you are terrible at?

Everyone assumes I’m skilled at math because my company works with data. But I’m actually numerically dyslexic (yes, it’s a real thing) and numbers have always been a real struggle for me. That’s one of the many inspirations for BadVR – my desire to work with data but my lack of technical skill with which to do so. I am in many ways the non-technical person for whom my product is built; I am my user. This gives me the power and the passion to build and also gives me the empathy needed to deliver an effective product that makes data accessible to everyone.

Of course, being acutely aware of this shortcoming I’ve assembled a team of very highly talented mathematical geniuses that augment my own weaknesses. So, to allay any question about my company’s ability to deliver a highly technical product, I want to underscore the idea that my company is not solely comprised of me. The heart and soul of BadVR is our team, and they are deeply capable in all the ways that I am not. That variety of skills and talent is what makes us powerful. We all balance each other’s weaknesses and strengths and, in doing so, create something better than any of us could ever achieve independently. 

What inspires you to learn?

I don’t need inspiration for this! I’m endlessly curious about everything, all the time. I never learned to stop asking “why?” Learning is my default state of being. Anytime I see anything, or experience anything, it inspires me to ask more questions, to dig deeper, to understand further. My Google search history is full of things like “how did dinosaurs procreate? What is dirt? Why is dark meat dark?” I just wonder and google and learn all the time. Every experience is an impetus for learning; a reason to dive into the whys, hows, and whats of yet another line of inquiry.

What do you need to believe in order to get through the day?

I have to believe that life doesn’t end with death. That I will again see the people I love who I’ve lost. If there isn’t an afterlife or there isn’t an alternate timeline where we meet again, I can’t continue. I’ve lost too many loved ones to be able to function without the belief that I will see them again. It goes without saying then that I believe in reincarnation, in the broadest sense. I strongly believe that the people we love never leave us and that in some way we end up back together. It’s not an evidence-based belief – besides anecdotal evidence anyways – but I must believe it. I do believe it. I will always believe it. Otherwise, the loneliness is crushing, overwhelming; the feeling akin to being forever a planetary stranger at the very end of the world.

What’s a view that you hold but can’t defend?

I have plenty of beliefs that don’t have scientific, evidence-based support. I can always defend every belief I have if you allow anecdotal evidence or emotional appeals. Some examples include my belief in the tarot, in astrology, in dream work, psychic powers, aliens, the collective subconscious, Bigfoot, the Missouri Skunk Ape, and ghosts. I’d be more than happy to argue their existence on an emotional and anecdotal level with anyone. But science of course doesn’t support or embrace such parapsychology and cryptozoology. This doesn’t stop me from believing, though. Many of the most important questions in life cannot be answered by science. I think the scientific method is important for lesser questions but for the big questions of life like “Why are we here? What is our purpose? What is the meaning of life?” — science fails. I’m more interested in the answers offered by faith and spirituality than I am in the answers offered by science, for these sorts of questions. In the face of the eternal, science can seem so small and pedantic. But of course, for the mundane it is very important.

What will the future killer Mixed Reality app do?

Visualize data and allow for immersive analysis! Data is the killer app for mixed reality. I firmly believe that, and I fully believe my company, BadVR, will be the industry standard tool for working with data immersively. I may be biased as BadVR is my company, but hey that’s what I believe! Our unique approach, mixing art with logic, the abstract with the concrete, is exactly the way this product needs to be approached. In the future, everyone will be able to easily see and interact with incredibly large, abstract and geospatial datasets with ease. We will think of data as an oracle; a source of truth. It’s important that everyone be able to access such a powerful product, which is a major focus of BadVR – universal accessibility.

What book have you recommended the most?

“The Panic Fables” by Alejandro Jodorowsky. A book of spiritual comics that delivers small truths via 1-page comics. It’s an easy entry point into the Jodo-sphere!

“Narcopolis” by Jeet Thayil. One of my all-time favorite passages can be found in this novel. It’s about a large cast of characters who frequent an opium den in Bombay (before it became Mumbai). Thayil is one of the few writers who can write prose that reads like poetry. I am forever a huge fan!

“The Hour of the Star” by Clarice Lispector. She deconstructs language and storytelling to deliver a narrative about a poor Brazilian girl and her search for meaning and transcendence in a world that doesn’t want or even see her. It is a visceral gut-punch of truth. Anything by Lispector is wonderful, but this story in particular is my favorite.

I will leave you with a quote from Lispector:

“I do not know much. But there are certain advantages in not knowing. Like virgin territory, the mind is free of preconceptions. Everything I do not know forms the greater part of me. And with this I understand everything. The things I do not know constitute my truth.”

EUE-Connect–An Ideal Dev Conference in Utrecht

The Florin Pub, Utrecht

I have a new favorite conference: EUE-Connect in Utrecht, Holland. EUE-Connect is an invitation-only annual event held over two days each year. It brings together software developers, 3D modelers, FX specialists and experience-agency people to share knowledge about the state of the art where all these professions meet. I got invited to add mixed reality to the mix this year and hopefully grow that aspect of the conference in the future.

EUE-Connect is the brainchild of Joep van der Steen, who is also the beating heart and conscience of the conference. He has managed to create an ongoing and organic event that remains friendly without ever being bureaucratic or fake – really the ultimate goal of any conference, though one that is difficult to maintain.

Scotch Eggs, British Museum Restaurant

Part of the secret to this is the FrienDA policy that Joep maintains. What this means, first of all, is that I can’t show you pictures or slides taken inside the Florin pub, where the event is held – so I’ll be showing you pictures of some tasty meals I had in London the following week. The other thing it means is I can’t talk specifically about the content of the talks I heard. This is all so that the speakers, some of whom are fairly well placed in some major corporations in the software, gaming and 3D modeling industries, feel free to talk about what they are most passionate about.

Tea and Scones, British Museum Restaurant

The reason this is a FrienDA rather than an NDA is to make clear why we follow these loose guidelines. It is to be friendly and respectful of others who are going out of their way to share what they know, to be a bit vulnerable by giving their opinions, and to allow people the freedom to be wrong. This is a civilized way to maintain confidences.

Eggs Benedict on bubble 'n squeak, Ozone Coffee

Coming from the Microsoft conference world, this seems like a much better way to treat one another and a much more successful way to keep faith with one another. In the Microsoft world, NDAs tend to be used not to maintain technical or product secrets anymore, but to broadly maintain corporate and personal reputations. It has reached the point where no genuinely useful information is disseminated by Microsoft to their partners anymore, yet every trivial email is shrouded in secrecy in perpetuity. Which is a bit silly.

Fried egg on Toast, Ozone Coffee

The other remarkable thing I ran into at the conference, as a typical Microsoft developer, was that this is a community built around non-Microsoft tools like Unreal, 3DS Max and to some extent Unity. It had never occurred to me before that there are other tools out there that people build their careers and reputations around in much the same way developers become knowledgeable and at the same time dependent upon certain Microsoft technologies.

Two pints, Belfast

In fostering this wonderful conference, Joep used some important tools that I think others might learn from. First, when some large vendors who sponsored the event in the past began to dominate the sessions, he simply cut back their participation. It is natural to become beholden to someone who gives your event lots of money, but at the same time the quality and sincerity of the event can suffer from it. Joep saw this happen in the past and simply arranged to operate on a different budget. Second, when he felt the conference was getting too large, he cut back attendance. This is rather the opposite of the ethos of most conferences, which see their goal as one of scaling up in size rather than scaling up in quality. Bucking that trend is quite something.

Menu from St John's, a tail-to-snout Michelin-starred eatery

So over the next year, if you should be fortunate enough to receive an invitation to EUE-Connect out of recognition for your excellent work in the FX, gaming, or software industries – or simply through good luck, as I did – do not hesitate to accept. It will be a conference-going experience like none other.

Extending Chatbots with Azure Cognitive Services

Microsoft Bot Framework is an open source SDK and set of tools for developing chatbots. One of the advantages of building chatbots with the Bot Framework is that you can easily integrate your bot service with the powerful AI algorithms available through Azure Cognitive Services. This is a quick and easy way to give your chatbot super powers when you need them.

Microsoft Cognitive Services is an ever-growing collection of algorithms developed by experts in the fields of computer vision, speech, natural-language processing, decision assistance, and web search. The services simplify a variety of common AI-based tasks, which are then easily consumable through web APIs. The APIs are also constantly being improved and some are even able to teach themselves to be smarter based on the information you feed them.

Here is a quick highlight reel of some of the current Cognitive Services available to chatbot creators:

Language

People have a natural ability to say the same thing in many ways. Intelligent bots need to be just as flexible in understanding what human beings want. The Cognitive Service Language APIs provide language models to determine intent, so your bots can respond with the appropriate action.

The Language Understanding Service (LUIS) easily integrates with Azure Bot Service to provide natural language capabilities for your chatbot. Using LUIS, you can classify a speaker’s intents and perform entity extraction. For instance, if someone tells your bot that they want to buy tickets to Amsterdam, LUIS can help identify that the speaker intends to book a flight and that Amsterdam is a location entity for this utterance.
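To make the ticket example concrete, here is a minimal sketch of how a bot might pull the intent and entities out of a LUIS v3 prediction. The response below is a hand-written sample based on the documented response shape (abbreviated; the live service returns additional fields), not output captured from the service:

```python
import json

# A sample LUIS v3 prediction response for the utterance
# "I want to buy tickets to Amsterdam" (abbreviated, hand-written
# to match the documented shape -- not captured from the service).
sample_response = json.loads("""
{
  "query": "I want to buy tickets to Amsterdam",
  "prediction": {
    "topIntent": "BookFlight",
    "intents": { "BookFlight": { "score": 0.97 } },
    "entities": { "location": ["Amsterdam"] }
  }
}
""")

def summarize_prediction(response: dict) -> tuple:
    """Pull out the top intent, its score, and any location entities."""
    prediction = response["prediction"]
    intent = prediction["topIntent"]
    score = prediction["intents"][intent]["score"]
    locations = prediction["entities"].get("location", [])
    return intent, score, locations

intent, score, locations = summarize_prediction(sample_response)
print(intent, score, locations)  # BookFlight 0.97 ['Amsterdam']
```

A bot would branch on `intent` (book a flight, cancel, get help) and feed the extracted entities into the appropriate dialog.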

While LUIS offers prebuilt language models to help with natural language understanding, you can also customize these models for particular language domains that are pertinent to your needs. LUIS also supports active learning, allowing your models to get progressively better as more people communicate with it.

Decision assist services

Cognitive Services has knowledge APIs that extend your bot’s ability to make judgments. Where the language understanding service helps your chatbot determine a speaker’s intention, the decision services help your chatbot figure out the best way to respond. Personalizer, currently in preview, uses machine learning to provide the best results for your users. For instance, Personalizer can make recommendations or rank a chatbot’s optional responses to select the best one. Additionally, the Content Moderator service helps identify offensive language, images, and video, filtering profanity and adult content.
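To sketch how ranking works, Personalizer’s Rank call takes a set of candidate “actions” with features plus context features, and returns a recommended action id. The action ids, features, and sample response below are all made up for illustration; only the overall request/response shape follows the documented API:

```python
# Hypothetical candidate responses a chatbot could send, expressed as
# Personalizer "actions". Ids and features here are invented examples.
rank_request = {
    "contextFeatures": [{"timeOfDay": "morning", "channel": "skype"}],
    "actions": [
        {"id": "greeting-formal", "features": [{"tone": "formal"}]},
        {"id": "greeting-casual", "features": [{"tone": "casual"}]},
    ],
    "eventId": "event-001",
}

# A hand-written sample Rank response: Personalizer returns a ranking
# plus the id of the action it recommends showing this user.
sample_rank_response = {
    "ranking": [
        {"id": "greeting-casual", "probability": 0.8},
        {"id": "greeting-formal", "probability": 0.2},
    ],
    "rewardActionId": "greeting-casual",
    "eventId": "event-001",
}

def choose_response(response: dict) -> str:
    """Return the action id Personalizer recommends."""
    return response["rewardActionId"]

best = choose_response(sample_rank_response)
print(best)  # greeting-casual
```

After showing the chosen response, the bot reports back a reward score for the event, which is how the underlying model improves over time.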

Speech recognition and conversion

The Speech APIs in Cognitive Services can give your bot advanced speech skills that leverage industry-leading algorithms for speech-to-text and text-to-speech conversion, as well as Speaker Recognition, a service that lets people use their voice for verification. The Speech APIs use built-in language models that cover a wide range of scenarios with high accuracy.

For applications that require further customization, you can use the Custom Recognition Intelligent Service (CRIS). This allows you to calibrate the language and acoustic models of the speech recognizer by tailoring it to the vocabulary of the application and to the speaking style of your bot’s users. This service allows your chatbot to overcome common challenges to communication such as dialects, slang and even background noise. If you’ve ever wondered how to create a bot that understands the latest lingo, CRIS is the bot enhancement you’ve been looking for.

Web search

The Bing Search APIs add intelligent web search capabilities to your chatbots, effectively putting the internet’s vast knowledge at your bot’s fingertips. Your bot can access billions of:

· webpages

· images

· videos

· news

· local businesses
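A web search from a bot is ultimately just an authenticated HTTP GET. The sketch below only constructs the request (it is never sent), using a placeholder subscription key; the endpoint shown is the one documented at the time of writing and may differ for your Azure resource:

```python
import urllib.parse
import urllib.request

# Placeholder values -- a real key comes from your Azure Bing Search
# resource, and your endpoint may differ from the one shown here.
BING_SEARCH_KEY = "<your-subscription-key>"
SEARCH_ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/search"

def build_search_request(query: str) -> urllib.request.Request:
    """Build (but do not send) a Bing Web Search request for a user query."""
    url = SEARCH_ENDPOINT + "?" + urllib.parse.urlencode({"q": query, "count": 5})
    return urllib.request.Request(
        url, headers={"Ocp-Apim-Subscription-Key": BING_SEARCH_KEY}
    )

req = build_search_request("best pizza in Amsterdam")
print(req.full_url)
```

The subscription key travels in the `Ocp-Apim-Subscription-Key` header, and the JSON response contains ranked `webPages`, `images`, and related result groups your bot can relay to the user.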

Image and video understanding

The Vision APIs bring advanced computer vision algorithms for both images and video to your bots. For example, you can use them to recognize objects, people’s faces, age, gender, or even feelings.

The Vision APIs support a variety of image-understanding features. They can categorize the content of images, determining if the setting is at the beach or at a wedding. They can perform optical character recognition on your photo, picking out road signs and other text. The Vision APIs also support several image and video-processing capabilities, such as intelligently generating image or video thumbnails, or stabilizing the output of a video for you.
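As a sketch of what a bot does with an analysis result, here is a hand-written sample modeled on the documented Computer Vision “analyze” response shape (abbreviated, with invented values), reduced to a one-line description:

```python
# A hand-written sample modeled on the Computer Vision "analyze"
# response shape (abbreviated; values invented for illustration).
sample_analysis = {
    "categories": [{"name": "outdoor_beach", "score": 0.96}],
    "description": {
        "captions": [{"text": "a group of people on a beach", "confidence": 0.92}]
    },
}

def describe_image(analysis: dict) -> str:
    """Reduce an analysis result to a single human-readable line."""
    top_category = max(analysis["categories"], key=lambda c: c["score"])
    caption = analysis["description"]["captions"][0]["text"]
    return f"{caption} (category: {top_category['name']})"

print(describe_image(sample_analysis))
```

A chatbot could speak this description aloud, or use the category to decide which dialog to start next.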

Summary

While chatbots are already an amazing way to help people interact with complex data in a human-centric way, extending them with web-based AI is a clear opportunity to make them even better assistants. Easy-to-use AI algorithms like the ones in Microsoft Cognitive Services remove language friction and give your chatbots super powers.

Creating a Chatbot with Microsoft Azure QnA Maker and Alexa

QnA Maker is Microsoft’s easy-to-use, cloud-based API for turning a public-facing FAQ page, product manuals, and support documents into a natural-language bot service. Because it takes in pre-vetted data to use as its “smarts,” it’s one of the easiest ways to build a powerful bot for your company.

Alexa, of course, is the world’s most pervasive host for conversational bots. It’s found in homes, corporate boardrooms, and anywhere else people want easy access to web-based information.

In this article, I will show you how to attach the plumbing that pushes the Q&A information your company wants users to know onto the conversational devices they use most frequently.

Part 1: Creating a bot service with QnA Maker

To get started, I first created a free Azure account to play with. I then went to the QnA Maker portal page and clicked the Create a knowledge base tab at the top to set up the knowledge base for my bot. I then clicked the blue Create a QnA service button to make a new QnA service with my free Azure account.

I followed the prompts throughout the process, which made it easy to figure out what I needed to do at each step.

In step 2, I selected my Azure tenant, Azure subscription name, and Azure resource name associated with the QnA Maker service. I also chose the Azure QnA Maker service I’d just created in the previous step to host the knowledge base.

I then entered a name for my knowledge base and the URL of my company’s FAQ to use as the brains for my knowledge base. If you just want to test this part out, you can even use the FAQ for QnA Maker itself.

QnA Maker has an optional feature called Chit-chat that let me give my bot service a personality. I decided to go with “The Professional” for this, but definitely would like to try out “The Comic” at some point to see what that’s like.

The next step was just clicking the Create your KB button and waiting patiently for my data to be extracted and my knowledge base to be created.

Once that was done, I opened the Publish page in the QnA Maker portal, published my knowledge base, and hit the Create Bot button.

After filling out additional configuration information for Azure that was specific to my account, I had a bot deployed with zero coding on Microsoft Bot Framework v4. I could even chat with it using the built-in “Test in Web Chat” feature. You can find more details in this cognitive services tutorial.

Part 2: Making your bot service work on Alexa

To get the bot service I created above working with Alexa, I had to use an open-source middleware adapter created by the botbuilder community. Fortunately, the Alexa Middleware Adapter was available as a NuGet package for Visual Studio.

I went to the Azure portal and selected the bot I created in the previous section. This gave me the option to “Download Bot source code.” I downloaded my bot source code as a zip file, extracted it into a working directory, and opened it up in Visual Studio 2017.

When the bot is automatically generated, it’s created with references to the Microsoft.AspNetCore.App NuGet package and the Microsoft.AspNetCore.App SDK. Unfortunately, this had compatibility issues with the middleware package. To fix this, I right-clicked on the Microsoft.AspNetCore.App NuGet package in the Solution Explorer window and removed it. This also automatically removed the equivalent SDK. To get back all the DLLs I needed, I used NuGet Package Manager to install the Microsoft.AspNetCore.All (2.0.9) package instead. Be sure to install this specific version of the package to ensure compatibility.

After making those adjustments to the solution, I went to the Visual Studio menu bar and selected Tools -> Nuget Package Manager -> Manage Nuget Packages for Solution. I searched for Adapters.Alexa and installed the Bot.Builder.Community.Adapters.Alexa package.

If your downloaded app is missing its Program.cs or Startup.cs file, you will need to create these for your project in order to build and publish. In my case, I created a new Microsoft Bot Builder v4 project and copied these two files from there. In the Startup method of the Startup class I created a ConfigurationBuilder to gather my app settings.

Then in the ConfigureServices and Configure methods, I added a call to services.AddAlexaBot and UseAlexa in order to enable the Alexa middleware and set up a special endpoint for calls from Alexa.

Following these code changes, I published the Web App Bot back to my Azure account. The original QnA Bot Service now has an additional channel endpoint for Alexa. The Alexa address is the original Web App Bot root address with /api/skillrequests added to the end.
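The endpoint rule is simple enough to capture in a couple of lines. The bot address below is a made-up example, not a real deployment:

```python
def alexa_endpoint(bot_root: str) -> str:
    """Append the Alexa channel path to a Web App Bot's root address."""
    return bot_root.rstrip("/") + "/api/skillrequests"

# Hypothetical Web App Bot address, for illustration only.
print(alexa_endpoint("https://mycompany-qna-bot.azurewebsites.net"))
# https://mycompany-qna-bot.azurewebsites.net/api/skillrequests
```

This is the same address you will paste into the Alexa skill’s HTTPS endpoint configuration later on.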

At this point, I was ready to go to my Amazon account and create a new Alexa skill. I went to: https://developer.amazon.com/alexa and signed in. (If you don’t already have a developer account, you will need to enter your information and agree to the developer EULA.) Next, I tapped the Alexa menu item at the top of the developer page and selected Alexa Skills Kit. This took me to https://developer.amazon.com/alexa/console/ask, where I clicked the Create Skill button.

I wrote in a unique name for my skill, selected Custom for the model, and clicked Create skill. On the following screen, I selected Start from Scratch for my template.

I selected JSON Editor.

Next, I opened another web browser and went to this source code, and copied the example JSON found in the README.md file.

I returned to the web browser that had the Amazon Alexa portal open and pasted the JSON into the box. I changed the invocationName to the name of my skill, clicked Save Model, and finally clicked Build Model.

After waiting patiently for the build to complete, I selected Endpoint in the left navigation window and clicked HTTPS. I then entered the address of the Azure App Service URL and added /api/skillrequests to the end.

To distribute my Alexa skill so people can use it on their own Amazon devices, I clicked the Distribution link in the Alexa developer console and followed the instructions from there.

And before I knew it, I was able to have a conversation with my company’s FAQ page, using the QnA Maker’s professional chit-chat personality, from my living room.

Microsoft’s convergence of chatbots and mixed reality

One of the biggest trends in mixed reality this year is the arrival of chatbots on platforms like HoloLens. Speech commands are a common input for many XR devices. Adding conversational AI to extend these native speech recognition capabilities is a natural next step toward a future in which personalized virtual assistants backed by powerful AI accompany us in hologram form. They may be relegated to providing us with shopping suggestions, but perhaps, instead, they’ll become powerful custom tools that help make us sharper, give honest feedback, and assist in achieving our personal goals.

If you have followed the development of sci-fi artificial intelligence in television and movies over the years, the move from voice to full holograms will seem natural. In early sci-fi, such as HAL from the movie 2001: A Space Odyssey or the computer from the original Star Trek, computer intelligence was generally represented as a disembodied voice. In more recent incarnations, such as Star Trek: Voyager and Blade Runner 2049, these voices are finally personified as full holograms: the Emergency Medical Hologram and Joi.

In a similar way, Cortana, Alexa, and Siri are slowly moving from our smartphones, Echos, and Invoke devices to our holographic headsets. These are still early days, but the technology is already in place and the future incarnation of our virtual assistants is relatively clear.

The rise of the chatbot

For Microsoft’s personal digital assistant Cortana, who started her life as a hologram in the Halo video games for Xbox, the move to holographic headsets is a bit of a homecoming. It seems natural, then, that when Microsoft HoloLens was first released in 2016, Cortana was already built into the onboard holographic operating system.

Then, in a 2017 article on the Windows Apps Team blog, Building the Terminator Vision HUD in HoloLens, Microsoft showed people how to integrate Azure Cognitive Services into their holographic head-mounted display in order to provide smart object recognition and even translation services as a Terminator-like HUD overlay.

The only thing left to do to get to a smart virtual assistant was to tie together the HoloLens’s built-in Cortana speech capabilities with some AI to create an interactive experience. Not surprisingly, Microsoft was able to fill this gap with the Bot Framework.

Virtual assistants and Microsoft Bot Framework

Microsoft Bot Framework combines AI backed by Azure Cognitive Services with natural-language capabilities. It includes a set of open source SDKs and tools that enable developers to build, test, and connect bots that interact naturally with users. With the Microsoft Bot Framework, you can easily create a bot that can speak, listen, understand, and even learn from your users over time with Azure Cognitive Services. This chatbot technology is sometimes referred to as conversational AI.

There are several chatbot tools available. I am most familiar with the Bot Framework, so I will be talking about that. Right now, chatbots built with the Bot Framework can be adapted for speech interactions or for text interactions like UPS’s virtual assistant. They are relatively easy to build and customize using prepared templates and web-based dialogs.

One of my favorite ways to build a chatbot is by using QnA Maker, which lets you simply point to an online FAQ page or upload product documentation to use as the knowledge base for your bot service. QnA Maker then walks you through applying a chatbot personality to your knowledge base and deploying it, usually with no custom coding. What I love about this is that you can get a sophisticated chatbot rolled out in about half a day.

Using the Microsoft Bot Framework, you also have the ability to take full control of the creation process to customize your bot in code. Bot apps can be created in C#, JavaScript, Python or Java. You can extend the capabilities of the Bot Framework with middleware that you either create yourself or bring into your code from third parties. There are even advanced capabilities available for managing complex conversation flows with branches and loops.
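The middleware idea is worth a quick illustration. The toy pipeline below is a simplification in plain Python, not the actual Bot Framework SDK API: each middleware sees the incoming message, may act on it, and passes control down the chain to the bot’s handler.

```python
from typing import Callable, List

# A message handler takes the incoming text and returns the bot's reply.
Handler = Callable[[str], str]

def profanity_filter(next_handler: Handler) -> Handler:
    """Middleware that masks a (toy) banned word before the bot sees it."""
    def handle(message: str) -> str:
        cleaned = message.replace("darn", "****")
        return next_handler(cleaned)
    return handle

def logging_middleware(next_handler: Handler) -> Handler:
    """Middleware that logs each incoming message, then passes it along."""
    def handle(message: str) -> str:
        print(f"incoming: {message}")
        return next_handler(message)
    return handle

def echo_bot(message: str) -> str:
    """A stand-in for the bot's own turn handler."""
    return f"You said: {message}"

def build_pipeline(middleware: List, bot: Handler) -> Handler:
    """Wrap the bot handler in each middleware, outermost first."""
    handler = bot
    for mw in reversed(middleware):
        handler = mw(handler)
    return handler

pipeline = build_pipeline([logging_middleware, profanity_filter], echo_bot)
print(pipeline("well darn"))  # You said: well ****
```

The real SDK works on activity objects rather than bare strings, but the layering is the same: third-party middleware like the Alexa adapter described earlier slots into the pipeline without the bot’s core logic changing.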

Ethical chatbots

Having introduced the idea above of building a Terminator HUD using Cognitive Services, it’s important to also raise awareness about fostering an environment of ethical AI and ethical thinking around AI. To borrow from the book The Future Computed, AI systems should be fair, reliable and safe, private and secure, inclusive, transparent, and accountable. As we build all forms of chatbots and virtual assistants, we should always consider what we intend our intelligent systems to do, as well as concern ourselves with what they might do unintentionally.

The ultimate convergence of AI and mixed reality

Today, chatbots are geared toward integrating skills for commerce like finding directions, locating restaurants, and providing help with a company’s products through virtual assistants. One of the chief research goals driving better chatbots is to personalize the chatbot experience. Achieving a high level of personalization will require extending current chatbots with more AI capabilities. Fortunately, this isn’t a far-future thing. As shown in the Terminator HUD tutorial above, adding Cognitive Services to your chatbots and devices is easy to do.

Because holographic headsets have many external sensors, AI will also be useful for analyzing all this visual and location data and turning it into useful information through the chatbot and Cognitive Services. For instance, cameras can be used to help translate street signs if you are in a foreign city or to identify products when you are shopping and provide helpful reviews.

Finally, AI will be needed to create realistic 3D model representations of your chatbot and overcome the uncanny valley that is currently holding back VR, AR, and MR. When all three elements are in place to augment your chatbot — personalization, computer vision, and humanized 3D modeling — we’ll be that much closer to what we’ve always hoped for — personalized AI that looks out for us as individuals.

Increasing Business Reach with Azure Bot Service Channels

Where do bots live? It’s a common misconception that bots live on your Echo Dot, on Twitter, or on Facebook. To the extent bots call anywhere their home, it’s the cloud. Objects and apps like your iPhone and Skype are the “channels” through which people communicate with your bot.

Azure Bot Service Channels

Out of the box, Azure Bot Service supports the following channels (though the list is always growing):

  • Cortana
  • Email
  • Facebook
  • GroupMe
  • Kik
  • LINE
  • Microsoft Teams
  • Skype
  • Skype for Business
  • Slack
  • Telegram

Through middleware created by the Bot Builder Community, your business’s bots can reach additional channels like Alexa and Google.

With Direct Line, your developers can also establish communications through your bots and your business’s custom apps on the web and on devices.
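Under the hood, Direct Line is a REST API: a custom app opens a conversation with one authenticated POST, then exchanges activities over it. The sketch below only constructs that opening request (it is never sent), with a placeholder channel secret:

```python
import urllib.request

# Placeholder -- a real secret comes from the Direct Line channel
# configuration in the Azure portal.
DIRECT_LINE_SECRET = "<your-direct-line-secret>"

def start_conversation_request() -> urllib.request.Request:
    """Build (but do not send) the Direct Line 3.0 call that opens a conversation."""
    return urllib.request.Request(
        "https://directline.botframework.com/v3/directline/conversations",
        method="POST",
        headers={"Authorization": f"Bearer {DIRECT_LINE_SECRET}"},
    )

req = start_conversation_request()
print(req.method, req.full_url)
```

The response to this call includes a conversation id and a stream URL, which the app then uses to send user messages and receive the bot’s replies.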

Companies like Dixons Carphone, BMW, Vodafone, UEI, LaLiga, and UPS are already using Microsoft Azure Bot Service support for multiple channels to extend their bots’ reach.

UPS Chatbot, for instance, delivers shipping information and answers customer questions through voice and text on Skype and Facebook Messenger. UPS, which invests more than $1 billion a year in technology, developed its chatbot in-house and plans to continue to update its functionality, including integration with the UPS My Choice® platform using Direct Line. In just the first eight months, UPS Bot has already had more than 200,000 conversations over its various channels.

LaLiga, the Spanish football league, is also reaching its huge and devoted fan base through multiple channels with Azure Bot Service. It is estimated that LaLiga touches 1.6 billion fans worldwide on social media.

Using an architecture that combines Azure Bot Service, Microsoft Bot Framework and multiple Azure Cognitive Services such as LUIS and Text Analytics, LaLiga maintains bots on Skype, Alexa and Google Assistant that use natural language processing. NLP allows their chatbots to understand both English and Spanish, their regional dialects, and even the soccer slang particular to each dialect. They are even able to use a tool called Azure Monitor anomaly detection to identify new player nicknames created by fans and then match them to the correct person. In this and similar ways, LaLiga’s chatbots are always learning and adapting over time. LaLiga plans to deploy its chatbots to almost a dozen additional channels in the near future.

Conclusion

Because social media endpoints are always changing, developing for a single delivery platform is simply not cost-effective. Channels provide businesses with a way to develop a bot once but deploy it to new social media platforms as they appear on the market and gain influence. At the same time, your core bot features can constantly be improved, and these improvements will automatically benefit the pre-existing channels people use to communicate with you.

Digital Heroism in a DeepFake World

training_preview6

I recently did a talk on deepfake machine learning that included a long intro about the dangers of deepfakes. If you don’t know what deepfakes are, just think of using Photoshop to swap people’s faces, except applied to movies instead of photos, and using AI instead of a mouse and keyboard. The presentation ended with a short video clip of Rutger Hauer’s “tears in rain” speech from Blade Runner, but replacing Hauer’s face with Famke Janssen’s.

But back to the intro: besides being used to make frivolous videos that insert Nicolas Cage into movies he was never in (you can search for them on YouTube), deepfake technology is also used to create fake celebrity pornography and, worst of all, what is known as “revenge porn”: malicious digital face swaps intended to humiliate women.

Noelle Martin has, in her words, become the face of the movement against image-based abuse of women. After years of having her identity taken away, digitally altered, and then distributed against her will on pornography websites since she was 17 years old, she decided to regain her own narrative by speaking out publicly about the issue and increasing awareness of it. She was immediately attacked on social media for bringing attention to the issue, and yet she persisted and eventually helped to criminalize image-based sexual abuse in New South Wales, Australia, with a provision specifically about altered images.

Criminalization of these acts followed at the commonwealth level in Australia. She is now working to increase global awareness of the issue – especially given that the webservers that publish non-consensual altered images can be anywhere in the world. She was also a finalist in the 2019 Young Australian of the Year award for her activism against revenge porn and for raising awareness of the way modern altered image technology is being used to humiliate women.

n_martin

I did a poor job of telling her story in my presentation this week. Beyond that, because of the nature of the wrong against her, there’s the open question of whether it is appropriate even to try to tell her story – after all, it is her story to tell and not mine.

Fortunately, Noelle has already established her own narrative loudly and forcefully. Please hear her story in her own words at Tedx Perth.

Once you’ve done that, please watch this Wall Street Journal story about deepfake technology in which she is featured.

When you’ve heard her story, please follow her Twitter account @NoelleMartin94 and help amplify her voice and raise awareness about the dark side of AI technology. As much as machine learning is in many ways wonderful and has the power to make our lives easier, it also has the ability to feed the worst impulses in us. Because ML shortens the distance between thought and act, as it is intended to do, it also easily erases the consciousness that is meant to mediate our actions: our very selves.

By speaking out, Ms. Martin took control of her own narrative. Please help her spread both the warning and the cure by amplifying her story to others.