Dr. Aharonson received her M.Sc. in physics from the Technion, Haifa, and her Ph.D. in electrical engineering from Tel Aviv University, and completed a postdoctoral research fellowship at the Eaton Peabody Laboratory, Harvard University, Boston. Prior to joining Afeka, Dr. Aharonson was founder and CTO of NexSig, Neurological Examination Technologies Ltd., a company that developed a biometric signal-processing-based technology for the detection of cognitive decline. Today, Dr. Aharonson is a senior lecturer in the Department of Software Engineering at the Afeka Academic College of Engineering and a senior researcher at the ACLP.
Yossi Bar-Yosef received his B.Sc. and M.Sc. degrees in electrical engineering in 1993 and 2003, respectively, both from Tel-Aviv University. He is currently pursuing the Ph.D. degree in electrical engineering at Tel-Aviv University, Israel. From 1993 to 2000 he was with the IDF, working on fast signal processing algorithms. From 2000 to 2015 he was with ISD Israel, ECB, and NICE Systems, working on speech processing and machine learning algorithms. His research interests include information theory, machine learning, and signal processing.
Yuval Bistritz received his B.Sc. degree in physics and his M.Sc. and Ph.D. degrees in electrical engineering from Tel Aviv University. Between 1979 and 1984, he held various assistant and teaching positions in the Department of Electrical Engineering, Tel Aviv University, and in 1987 he joined the department as a faculty member. From 1984 to 1986, he was a Research Scholar in the Information Systems Laboratory, Stanford University. From 1986 to 1987, he was with AT&T Bell Labs, and from 1994 to 1996 with the DSP Group. His research interests are within the areas of signal processing and system theory, with a current focus on a computer algebra approach to some signal processing and stability algorithms and on other issues in speaker recognition. Dr. Bistritz received the Chaim Weizmann Fellowship award for postdoctoral research in 1984 and the distinguished researcher award from the Israeli Technological Committee in 1992, and was elevated to Fellow of the IEEE in 2003.
Israel Cohen received B.Sc. (Summa Cum Laude), M.Sc. and Ph.D. degrees in electrical engineering from the Technion - Israel Institute of Technology, Haifa, Israel, in 1990, 1993 and 1998, respectively. From 1990 to 1998, he was a Research Scientist with RAFAEL Research Laboratories, Haifa, Israel Ministry of Defense. From 1998 to 2001, he was a Postdoctoral Research Associate with the Computer Science Department, Yale University, New Haven, CT. In 2001 he joined the Electrical Engineering Department of the Technion, where he is currently an Associate Professor. Dr. Cohen is a recipient of the Alexander Goldberg Prize for Excellence in Research, and the Muriel and David Jacknow award for Excellence in Teaching. He served as Associate Editor of the IEEE Transactions on Audio, Speech, and Language Processing and IEEE Signal Processing Letters. His research interests are statistical signal processing, analysis and modeling of acoustic signals, speech enhancement, noise estimation, microphone arrays, source localization, blind source separation, system identification and adaptive filtering.
Koby received his Ph.D. (2004) in computer science and his B.Sc. (1999) in mathematics, physics, and computer science, both summa cum laude, from the Hebrew University of Jerusalem. He was a postdoctoral fellow and research associate at the Department of Computer and Information Science, University of Pennsylvania, between 2004 and 2009. Koby joined the Technion Department of Electrical Engineering in 2009, where he is currently an associate professor. Koby has published more than a dozen journal papers and sixty conference papers, is a member of the editorial boards of the Machine Learning journal and the Journal of Machine Learning Research, has served on the program committees of leading conferences in machine learning, and has organized several international workshops. His research interests are in the design, analysis, and study of machine learning and recognition methods and their application to real-world data, particularly natural language processing.
He received M.S. and Ph.D. degrees in electrical engineering from the Universitat Politècnica de Catalunya (UPC) in 1986 and 1989, respectively. He has been a member of the Department of Signal Theory and Communications since 1986, currently as a Full Professor. From August 1991 to July 1992, he was a visitor at the Signal and Image Processing Institute, University of Southern California. He received the 1992 Marconi Young Scientist Award. He was a co-founder (1999) of the company VERBIO S.L., specialized in spoken language technology. From 2006 to 2010, he was director of the Center for Language and Speech Technologies and Applications (TALP). He has published more than 150 scientific papers on statistical signal analysis and its application to communication systems, speech processing, and statistical machine translation. During the last five years, his research has focused on (spoken) language translation, participating in several Spanish (AVIVAVOZ, BUCEADOR) and EU-funded projects (FAME, LC-STAR, TC-STAR and FAUST).
Sadaoki Furui received B.S., M.S., and Ph.D. degrees from the University of Tokyo, Japan, in 1968, 1970, and 1978, respectively. Since joining the Nippon Telegraph and Telephone Corporation (NTT) Labs in 1970, he has worked on speech analysis, speech recognition, speaker recognition, speech synthesis, speech perception, and multimodal human-computer interaction. He became a Professor at the Tokyo Institute of Technology in 1997 and was given the title of Professor Emeritus in 2011. He is now serving as President of the Toyota Technological Institute at Chicago (TTI-C). He has authored or coauthored over 990 published papers and books. He has received numerous awards from scientific societies and government organizations.
Eduard Golshtein received B.Sc., M.Sc., and Ph.D. degrees from Ben-Gurion University, Beer-Sheva, Israel, in 1994, 1995, and 1999, respectively. Since 1997, Eduard Golshtein has held research positions in a number of commercial companies and has taken part in the development of products related to speech recognition, speaker verification, text-to-speech, and speech coding technologies. His main research interests are DNNs for speech recognition, speech recognition for low-resource embedded systems, lexical stress and speech prosody, acoustic features for speech recognition, and parallel computation for speech recognition.
Ron is the manager of the speech technologies group at the IBM Haifa Research Lab. He received his B.Sc. and M.Sc. degrees in electrical engineering from the Technion - Israel Institute of Technology, in 1990 and 1993, respectively. His area of expertise is speech processing, including speech synthesis, speech recognition, speaker recognition, and speech coding. Ron joined IBM in 1993 and has since led various speech processing research projects, becoming the speech technologies group manager in 2006. Ron has published more than 20 papers and filed numerous patents.
He received his Ph.D. in computer science from the School of Computer Science and Engineering at the Hebrew University of Jerusalem in 2007. From 2007 to 2009, he was a postdoctoral researcher at the IDIAP Research Institute in Switzerland. From 2009 to 2012, he was a research assistant professor at the Toyota Technological Institute at Chicago (TTIC). Since 2012, he has been a faculty member in the Department of Computer Science at Bar-Ilan University. His research interests concern both machine learning and the computational study of human speech and language. In machine learning, his research focuses on large-margin and kernel methods, structured prediction, and learning through random perturbations. His research on speech and language concerns speech processing, speech recognition, acoustic phonetics, and pathological speech.
Itshak Lapidot received his B.Sc., M.Sc., and Ph.D. degrees from the Electrical and Computer Engineering Department, Ben-Gurion University, Beer-Sheva. He held a postdoctoral position at IDIAP, Switzerland. Dr. Lapidot was previously a lecturer in the Electrical and Electronics Engineering Department at the Sami Shamoon College of Engineering (SCE) in Beer-Sheva, Israel, and served as a researcher at the Laboratoire Informatique d'Avignon (LIA), University of Avignon, France. Recently, Dr. Lapidot assumed a teaching position in the Electrical Engineering Department at the Afeka Academic College of Engineering and joined the ACLP research team. Dr. Lapidot's primary research interests are speaker diarization and speaker verification.
For 10 years, James A. Larson, PhD, chaired the World Wide Web Consortium's Voice Browser Working Group, which creates standard languages for developing speech applications. He is a member of and contributor to the W3C's Multimodal Interaction Working Group, which creates standard languages for multimodal applications. Jim is co-chair of the SpeechTEK conferences held in New York. Jim is a columnist for Speech Technology Magazine and the author of VoiceXML: Introduction to Developing Speech Applications and of books on user interfaces and database management systems. Jim is an adjunct professor at Portland State University and the Oregon Institute of Technology in Portland, Oregon, where he teaches courses in user interfaces, database management systems, and speech applications. He is the vice president of Larson Technical Services.
Chin-Hui Lee has accumulated 20 years of industry and 12 years of academic experience in speech processing. He is a Fellow of the IEEE and of ISCA, has published 400 papers, and holds 30 patents. He won the IEEE Signal Processing Society's 2006 Technical Achievement Award for "Exceptional Contributions to the Field of Automatic Speech Recognition". In 2012, he was awarded the ISCA Medal for Scientific Achievement for "pioneering and seminal contributions to the principles and practice of automatic speech and speaker recognition, including fundamental innovations in adaptive learning, discriminative training and utterance verification".
He received the B.Sc. and M.Sc. degrees in 1964 and 1967, respectively, from the Technion - Israel Institute of Technology, Haifa, Israel, and the Ph.D. degree in 1971 from the University of Minnesota, Minneapolis, all in electrical engineering. He joined the Technion in 1972, where he was an Elron-Elbit Professor of Electrical Engineering until his retirement in October 2011, becoming a Professor Emeritus. During the period 1979 to 2001, he spent about six years, cumulatively, of sabbaticals and summer leaves at AT&T Bell Laboratories, Murray Hill, NJ, and AT&T Labs, Florham Park, NJ, conducting research in the areas of speech and image communication. In 1975 he co-established the Signal and Image Processing Laboratory (SIPL) at the Technion and has since served as its academic head. The lab is active in image/video and speech/audio processing research and education. From 2006 to 2010, he served as the Director of the Center for Communication and Information Technologies (CCIT) at the EE Department, Technion. This center hosts the EE Department's Industrial Liaison Program (ILP). He was on the Editorial Board of the Journal of Visual Communication and Image Representation from 1999 to 2012, and during 2010-2011 he was on the Senior Editorial Board of the IEEE Journal of Selected Topics in Signal Processing. He is a recipient of the 2007 IBM Faculty Award and of the 2011 Outstanding Achievement Award from his alma mater, the University of Minnesota. He became a Fellow of the IEEE in 1987 and has been a Life Fellow of the IEEE since 2009. His main research interests are in image, video, speech, and audio coding; speech and image enhancement; text-to-speech synthesis; hyperspectral image analysis; data embedding in signals; and digital signal processing techniques.
Joseph Mariani was the director of LIMSI, a French CNRS laboratory, from 1989 to 2000, and head of its "Human-Machine Communication" department, covering various modalities (spoken and written language processing, computer vision, computer graphics, gestural communication, virtual and augmented reality, etc.) and various approaches (computer science, signal processing, linguistics, cognitive science, human factors, social sciences). He also chaired the Scientific Committee of the LIMSI VENISE transversal action on virtual and augmented reality. He then served as Director of the "Information and Communication Technologies" department at the French Ministry of Research from 2001 to 2006. In this framework, he managed several national programs; in particular, he launched the Techno-Langue and Techno-Vision actions, addressing technology development and assessment in those domains. He is now director of the Institute for Multilingual and Multimedia Information (IMMI), a joint international laboratory involving LIMSI, the Karlsruhe Institute of Technology (KIT), and RWTH Aachen (Germany), established in 2008 in the framework of the Quaero national French program. On the international scene, he coordinated the FRANCIL Network of the Francophone University Association (AUF), chaired the European Speech Communication Association, now the International Speech Communication Association (ISCA), and the European Language Resources Association (ELRA), participated in the Board of the European Language & Speech Network of Excellence (ELSNET) and in the Steering Committee of FLaReNet (the Fostering Language Resources Network), and was the general convener of the COCOSDA committee. He presently sits on the Management Board and Council of META-NET (the Multilingual Europe Technological Alliance Network of Excellence). He is on the Editorial Committees of the "Language Resources and Evaluation" and "Speech Technology" journals and of the "Text, Speech and Language" book series.
He participated in the Editorial Committees of the “Speech Communication” Journal and of the "Survey of the State-of-the-Art in Human Language Technology". He edited the “Spoken Language Processing” monograph. His research activities concern Human-Machine Communication and Human Language Technologies.
Asunción Moreno received her Ph.D. in telecommunication engineering in 1987 from the Universitat Politècnica de Catalunya (UPC), Spain, where she is a Full Professor. From 2004 to 2006 she was director of the Technologies and Applications of Language and Speech (TALP) research center, and from 2009 to 2013 she was Head of the Signal Theory and Communications Department. Her current research interests are speech recognition, speech synthesis, and spoken databases. She has published more than 100 papers in international journals and conferences and has participated in several EU- and nationally-funded projects.
Hermann Ney is a full professor of computer science at RWTH Aachen University, Germany. His main research interests lie in the area of statistical methods for pattern recognition and human language technology and their specific applications to speech recognition, machine translation, and handwriting recognition. In particular, he has worked on dynamic programming and discriminative training for speech recognition, on language modelling, and on phrase-based approaches to machine translation. His work has resulted in more than 600 conference and journal papers (h-index 73, estimated using Google Scholar). He is a Fellow of both the IEEE and ISCA. In 2005, he was the recipient of the Technical Achievement Award of the IEEE Signal Processing Society. In 2010, he was awarded a senior DIGITEO chair at LIMSI/CNRS in Paris, France. In 2013, he received the award of honour of the International Association for Machine Translation.
Björn W. Schuller received his diploma in 1999 and his doctoral degree in 2006, and was entitled Adjunct Teaching Professor, all in electrical engineering and information technology, from TUM in Munich, Germany. At present, he is a Full Professor and head of the Chair of Complex and Intelligent Systems at the University of Passau, Germany, a Senior Lecturer (Associate Professor) in Machine Learning in the Department of Computing at Imperial College London, UK (since 2013), and the co-founding CEO of audEERING UG (limited). In 2013 he was invited as a permanent Visiting Professor at the Harbin Institute of Technology, China, and as a Visiting Professor at the University of Geneva, Switzerland, where he remains an appointed associate. Former affiliations and visits include TUM, Germany, from 2006 to 2014; Joanneum Research in Graz, Austria, in 2012; and the CNRS-LIMSI Spoken Language Processing Group in Orsay, France, in 2009-2010. His more than 400 articles and papers have received over 7,500 citations, with an h-index of 42. Currently, he is Editor-in-Chief of the IEEE Transactions on Affective Computing, a member of the IEEE SLTC, and President of the AAAC.
Professor Ilan D. Shallom is an expert in the field of speech signal processing, serving as CTO of Speech Recognition at AudioCodes, a VoIP technology company. He is involved in both speech compression and speech recognition technologies. He is a visiting professor in the Electrical and Computer Engineering Department at Ben-Gurion University, Israel, where he heads a research lab in speech processing and lectures in several undergraduate and graduate courses.
Dr. Silber-Varod is a linguist specializing in speech prosody and speech technologies. An alumna of Tel-Aviv University, she is currently a Research Fellow at the Research Center for Innovation in Learning Technologies, The Open University of Israel. Dr. Silber-Varod is also a consultant and content developer for the language department at The Center for Educational Technology (CET).
Richard M. Stern has been a member of the faculty of Carnegie Mellon University since 1977, where he is currently a Professor in the Department of Electrical and Computer Engineering, the Department of Computer Science, and the Language Technologies Institute, and a Lecturer in the School of Music. Much of Dr. Stern's current research is concerned with the development of techniques that improve the robustness of automatic speech recognition systems and related technologies. He has also worked extensively in psychoacoustics, where he is best known for theoretical work in binaural perception. Dr. Stern is a Fellow of the IEEE, the Acoustical Society of America, and the International Speech Communication Association (ISCA). He was the ISCA 2008-2009 Distinguished Lecturer, and he served as the General Chair of Interspeech 2006.
Dr. Eli Tzirkel is a staff researcher at General Motors. He received his B.Sc. degree in electrical and computer engineering from Ben-Gurion University, Israel, and his Ph.D. degree in information engineering from Cambridge University. Eli was a research group manager at Canon Research Europe, a director at Phonetic Systems, and CTO/VP R&D at RADLIVE. Eli has over 20 years of experience in the research and productization of speech, audio, natural language, and information retrieval technologies.
Dr. Tzur Vaich received his Ph.D. degree from the Electrical and Computer Engineering Department at Ben-Gurion University, Beer-Sheva, Israel. He has held several research positions in a number of commercial and start-up companies, taking part in the development of speech recognition solutions, and was a lecturer at Ben-Gurion University. Since 2007, he has been working at SpinVox UK (a Nuance company), where he currently holds the position of principal research scientist.
After 37 years at AT&T Labs Research, Jay Wilpon recently joined the executive team at Interactions Corp. as SVP of Natural Language Research, focusing on creating technologies that enable the intelligent "interface of things". Joining him is the entire AT&T Watson speech and language team, as part of a strategic acquisition by Interactions Corporation. Since beginning his career in 1977, Jay has been one of the world's pioneers of, and chief evangelists for, speech and language technologies and services. Jay has authored over 150 publications and patents. He has been a leading innovator for a number of advanced voice-enabled services throughout his career, including AT&T's How May I Help You, the first nationwide deployment of a true human-like spoken language understanding based service. His work led to the first nationwide deployments of both speech recognition and spoken language understanding technologies. Jay and his team have a lot of fun addressing the key challenges in speech, natural language processing, and multimodal dialog systems required to advance core science and innovative products and services, such as cloud APIs, multimodal virtual assistants, and unstructured analytics for business intelligence, all enabling the "interface of things". Jay was awarded the distinguished honor of IEEE Fellow for his leadership in the development of automatic speech recognition algorithms. For pioneering leadership in the creation and deployment of speech recognition-based services in the telephone network, Jay was also awarded the honor of AT&T Fellow. Altogether, the service innovations that Jay has been associated with have produced revenue and cost savings of billions of dollars for their customers.