New York tech leaders sound off on AI hype
With all the hype around AI, we asked several local AI leaders the same question: What about the technology still isn’t getting enough attention?
Here’s what they had to say:
Jack Kokko, Founder & CEO, AlphaSense:
"It's been well-covered that AI in the workplace, including more sophisticated applications like GenAI, has been met with discomfort by many employees due to uncertainty and fears around its potential to replace human capital. However, AI's ability to create new avenues for meaningful work deserves more recognition. Like any emerging technology, if applied correctly, AI can not only help professionals complete repetitive, monotonous tasks at unparalleled speed and precision; it can also empower employees to elevate their role in the workplace as they leverage AI like a personal assistant, helping them think critically, innovate, and analyze, doubling their productivity.
Ultimately, AI can help bring ideas to life, but the art of human-driven analysis can't be replicated. A comprehensive AI story should emphasize its role in unleashing maximum human potential safely and responsibly, enabling us to move faster, explore fresh ideas, and take on more intellectually stimulating challenges.”
Paris Heymann, Partner, Index Ventures:
"There's huge interest in AI from business customers. Many of the most prominent use cases for AI today are function-specific (for example, AI for marketing, AI for copywriting, AI for programming). In the future, I think there will be greater focus on Vertical AI products built for specific industries rather than specific functions: AI for restaurants, AI for teachers, AI for law firms, AI for veterinarians. The opportunity for AI trained on industry-specific data is endless."
Sarah Nagy, Founder & CEO, Seek AI:
"I think there needs to be more attention on how to ask AI follow-up questions and take advantage of its flexibility in being corrected. Sometimes I will see people get an unsatisfactory answer back from, say, ChatGPT and have no idea how to respond. Correcting ChatGPT can be very similar to correcting a human, but this is counterintuitive to many people and I'd like to see more visibility around this."
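In practice, that back-and-forth is just another message appended to the conversation. Here is a minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name and the prompts are illustrative placeholders, not Seek AI's product.

```python
# Minimal sketch: correct the model the way you would correct a colleague,
# by appending its answer plus your correction to the chat history.
# Assumes the openai package (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Summarize last quarter's sales by region."},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first.choices[0].message.content)

# Instead of giving up on an unsatisfactory answer, keep it in the history
# and respond with a specific correction.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "That mixes up EMEA and APAC. Keep the regions separate and "
               "show each region's total on its own line.",
})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```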
Anju Kambadur, Head of AI Engineering, Bloomberg:
"The recent focus on developing large general-purpose language models has drawn academic attention away from where they can make important contributions, particularly in the area of domain- and task-specific models, which often yield superior results. In fact, it has slowed down progress in many areas of NLP research, as the sector has recently focused less on problems that cannot be solved with a chat-like interface, as opposed to the deep dives into specific challenging problems that require domain expertise.
This is an area in which academia is well suited to playing a leading role and making big contributions. In fact, focusing on domain-specific knowledge will help us build models that better reflect how humans learn domain-specific knowledge. We must renew our focus on developing and evaluating domain-specific models and enable academics to focus their attention where industry underinvests, such as low-resource languages, dialects and variations in language, among other things."
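For a sense of what "domain-specific" can mean in practice, adapting an open checkpoint to in-domain data is often the whole recipe. The sketch below uses Hugging Face transformers and a public financial-sentiment corpus; the model, dataset, and hyperparameters are illustrative placeholders, not Bloomberg's setup.

```python
# Minimal sketch of domain-specific fine-tuning with Hugging Face.
# Assumes the transformers and datasets packages are installed.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Financial PhraseBank: sentiment-labeled sentences from financial news.
dataset = load_dataset("financial_phrasebank", "sentences_allagree", split="train")
dataset = dataset.train_test_split(test_size=0.1, seed=42)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# Start from a general-purpose checkpoint and specialize it to the domain task.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)  # negative / neutral / positive

args = TrainingArguments(
    output_dir="finance-sentiment",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
```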
Brian Schechter, Partner, Primary:
"An unbelievably hype-worthy but overlooked phenomenon is the degree to which brilliant young people, for whom coding is a first language and computer interactions are as natural as breathing, have gone all in on AI. I've encountered more college dropouts turned founders in the last year than in the last decade. One recently said to me, "I thought I was born too late to explore the earth and too early to explore the stars, but now I believe that I was born at the most exciting time ever, the dawn of super intelligence." They view AGI as inevitable, and believe that contributing to that inevitability is the most worthwhile thing imaginable."
John Dickerson, Co-founder & Chief Scientist, Arthur:
"We need to focus on the data. LLMs trained from scratch on bad data will be bad. Popular LLMs fine-tuned on bad data will be bad. And LLMs implanted in retrieval-augmented generation (RAG) systems, the dominant industry paradigm, with bad private data will be bad. The data matters through the full LLMOps pipeline.
Beyond that, adversaries can influence the data that is used to train LLMs and other foundation models in order to control inference-time behavior. That has been an issue in traditional ML and will continue to be an issue now. The U.S. government's red-teaming efforts at, for example, DEF CON are a necessary but insufficient step here. We need more."
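A toy RAG pipeline makes the data point concrete: whatever the retriever pulls from the private corpus is pasted directly into the prompt, so a bad document becomes a bad answer. This sketch uses scikit-learn for retrieval; the corpus, query, and prompt are illustrative stand-ins, not Arthur's stack.

```python
# Minimal RAG sketch: retrieve the most similar private documents, then stuff
# them into the prompt the LLM will condition on. Assumes scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Q3 revenue grew 12% year over year, driven by the enterprise segment.",
    "Q3 revenue fell 40%!!! (stale draft, never corrected)",  # bad private data
    "Headcount was flat quarter over quarter.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

query = "How did revenue change in Q3?"
context = "\n".join(retrieve(query))

# Good or bad, the retrieved context is what the model sees at inference time.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```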
Alex Sambvani, Co-founder & CEO, Slang.ai:
"AI is certainly having a moment. To me, what is lacking in the mainstream narrative is real business metrics being driven by AI. What are these novel solutions actually doing for real businesses and/or consumers? As we move forward, I think paying attention to the real-world impact of AI, taking it from a conceptual to a tangible level, will be key."