
Winslow Burleson

Associate Professor
Affiliated Associate Professor, Tandon School of Engineering
Affiliated Associate Professor, Courant Institute
Affiliated Associate Professor, College of Global Public Health

1 212 998 5376

433 First Avenue
Room 740
New York, NY 10010
United States


Professional overview

Winslow Burleson joined the New York University College of Nursing as Associate Professor in September 2014. Win leads the NYU-X Lab (www.nyu-x.org), advancing transformations in health, technology, education, and innovation, and is the Principal Investigator for the five-year, $4 million NSF Major Research Infrastructure project, "Development of Experiential Supercomputing: A Transdisciplinary Research and Innovation Holodeck."

Previously, Win was an Assistant Professor in the School of Computing, Informatics, and Decision Systems Engineering in the Ira A. Fulton Schools of Engineering, and a Senior Sustainability Scientist at the Julie Ann Wrigley Global Institute of Sustainability, at Arizona State University (ASU). He was recently selected as a Fulbright Specialist Roster Candidate in Computer Science. As a social inventor, he led Motivational Environments, a 20-person research group advancing human-centered technology and design strategy to improve quality of life through increased creativity, learning, and health. Win was recognized by ASU as an outstanding professor and by the National Academy of Engineering (NAE) as "one of the nation's leading engineering researchers and educators" for service and leadership in advancing world-class transdisciplinary research and education for minority and underserved learners.

Win has raised over $11 million in collaborative funding and received the best paper award at the 2009 International Conference on Artificial Intelligence in Education, the field's top conference, for work showing Affective Learning Companions' ability to have large-scale societal impact on thousands of students by bringing cyber-enabled learning research into classroom settings. He has been awarded 10 patents and four inventor and innovator awards from IBM Research, received two Time Magazine awards for the top 10 and top 50 inventions of the year, authored over 100 scientific publications, exhibited at the Pompidou Centre, and performed in Carnegie Hall.

Honors and awards

White House Fellow Regional Finalist (New York, NY) (2015)



Professional membership

Association for Computing Machinery



An assistive technology system that provides personalized dressing support for people living with dementia: Capability study

Burleson, W., Lozano, C., Ravishankar, V., Lee, J., & Mahoney, D. (2018). Journal of Medical Internet Research, 20(5). 10.2196/medinform.5587
Background: Individuals living with advancing stages of dementia (persons with dementia, PWDs) or other cognitive disorders do not have the luxury of remembering how to perform basic day-to-day activities, which in turn makes them increasingly dependent on the assistance of caregivers. Dressing is one of the most common and stressful activities provided by caregivers because of its complexity and the privacy challenges posed during the process. Objective: In preparation for in-home trials with PWDs, the aim of this study was to develop and evaluate a prototype intelligent system, the DRESS prototype, to assess its ability to provide automated assistance with dressing that can afford independence and privacy to individual PWDs and potentially provide additional freedom to their caregivers (family members and professionals). Methods: This laboratory study evaluated the DRESS prototype's capacity to detect dressing events. Eleven healthy participants simulated common correct and incorrect dressing scenarios, ranging from donning a shirt and pants inside out or backwards to partial dressing, typical issues that challenge a PWD and their caregivers. Results: A set of expected detections for correct dressing was prepared via video analysis of all participants' dressing behaviors. In the initial phases of donning either shirts or pants, the DRESS prototype missed only 4 of 388 expected detections. The prototype's ability to recognize other missing detections varied across conditions. There were also some unexpected detections, such as detection of the inside of a shirt as it was being put on. Throughout the study, detection of dressing events was adversely affected by the relatively smaller effective size of the markers at greater distances. Although the DRESS prototype incorrectly identified 10 of 22 cases for shirts, it performed significantly better for pants, incorrectly identifying only 5 of 22 cases.
Further analyses identified opportunities to improve the DRESS prototype's reliability, including increasing the size of markers, minimizing garment folding or occlusions, and optimally positioning participants with respect to the DRESS prototype. Conclusions: This study demonstrates the ability to detect clothing orientation and position and to infer the current state of dressing using a combination of sensors, intelligent software, and barcode tracking. With the improvements identified by this study, the DRESS prototype has the potential to provide a viable option for automated dressing support, assisting PWDs in maintaining their independence and privacy while potentially providing their caregivers with much-needed respite.

Active Learning Environments with Robotic Tangibles: Children's Physical and Virtual Spatial Programming Experiences

Burleson, W., Harlow, D. B., Nilsen, K. J., Perlin, K., Freed, N., Jensen, C., Lahey, B., Lu, P., & Muldner, K. (2017). IEEE Transactions on Learning Technologies. 10.1109/TLT.2017.2724031
Active Learning Environments with Robotic Tangibles (ALERT) and Robopad, an analogous on-screen virtual spatial programming environment for educational Human Robot Interaction (HRI), have been developed. Evaluations of these in the context of free play and open-ended learning activities show that both systems afford opportunities for young children to engage in spatial programming, creating improvisational and sequential programs that mediate interactions between the environment, robots, and humans in responsive and creative ways. These systems demonstrate innovative opportunities for advancing mixed reality spatial programming activities as a form of HRI that fosters engaging, seamless cyberlearning experiences across formal and informal environments.

Affect Measurement

Gonzalez-Sanchez, J., Baydogan, M., Chavez-Echeagaray, M. E., Atkinson, R. K., & Burleson, W. (2017). 10.1016/B978-0-12-801851-4.00011-2
Affect signals what humans care about and what matters to them. By providing computers with the capability to measure affect, researchers aspire to narrow the communication gap between the emotional human and the emotionally detached computer, with the ultimate aim of enhancing human-computer interactions. This chapter explores the multidisciplinary foundations of affective state measurement as a multimodal process. Specifically, it: (1) describes popular sensing technologies, including brain-computer interfaces, face-based emotion recognition systems, eye-tracking systems, physiological sensors, body language recognition, and text-based language processing; (2) explores the data gathered from each technology and its key characteristics; (3) outlines the pros and cons of each technology; (4) examines sampling, filtering, and multimodal affective data integration methodologies; and (5) presents the tools and algorithms used to analyze affective data off-line, seeking to make inferences regarding the meaning of that data and to correlate it with stimuli.

Can a Non-Cognitive Learning Companion Increase the Effectiveness of a Meta-Cognitive Learning Strategy?

Vanlehn, K., Zhang, L., Burleson, W., Girard, S., & Hidalgo-Pontet, Y. (2017). IEEE Transactions on Learning Technologies, 10(3), 277-289. 10.1109/TLT.2016.2594775
This project aimed to improve students' learning and task performance using a non-cognitive learning companion in the context of both a tutor and a meta-tutor. The tutor taught students how to construct models of dynamic systems and the meta-tutor taught students a learning strategy. The non-cognitive learning companion was designed to increase students' effort and persistence in using the learning strategy. It decided when to intervene and what to say using both log data and affective state monitoring via a facial expression camera and a posture sensor. Experiments with high school students showed that the non-cognitive learning companion increased students' learning and performance. However, it had no effect on performance during a transfer phase in which the learning companion, meta-tutor, and tutor were all absent. The transfer phase null effect must be interpreted with caution due to low power, a possible floor effect, and other issues.

Health Technology-Enabled Interventions for Adherence Support and Retention in Care Among US HIV-Infected Adolescents and Young Adults: An Integrative Review

Dunn-Navarra, A.-M., Gwadz, M., Whittemore, R., Bakken, S. R., Cleland, C. M., Burleson, W., Jacobs, S. K., & D’Eramo Melkus, G. (2017). AIDS and Behavior, 1-18. 10.1007/s10461-017-1867-6
The objective of this integrative review was to describe current US trends for health technology-enabled adherence interventions among behaviorally HIV-infected youth (ages 13–29 years), and to present the feasibility and efficacy of identified interventions. A comprehensive search was executed across five electronic databases (January 2005–March 2016). Of the 1911 identified studies, nine met the inclusion criteria of a quantitative or mixed methods design and a technology-enabled adherence and/or retention intervention for US HIV-infected youth. The majority were small pilots. Intervention dose varied between studies applying similar technology platforms, with more than half not informed by a theoretical framework. Retention in care was not a reported outcome, and operationalization of adherence was heterogeneous across studies. Despite these limitations, synthesized findings from this review demonstrate the feasibility of computer-based interventions and the initial efficacy of SMS texting for adherence support among HIV-infected youth. Moving forward, there is a pressing need for the expansion of this evidence base.

Addressing affective states with empathy and growth mindset

Arroyo, I., Schultz, S., Wixon, N., Muldner, K., Burleson, W., & Woolf, B. P. (2016).
We present results of a randomized controlled study that compared different types of affective support messages delivered by pedagogical agents. Results suggest that a character that is empathic and emphasizes the malleability of intelligence and the importance of effort benefits student learning while reducing boredom and anxiety. Emphasizing success and failure ("That is correct/wrong") appears to be detrimental to learning and interest and promotes anxiety. We examine a variety of student affective, cognitive, and engagement outcomes in an intelligent tutoring system for mathematics.

Optimists' Creed: Brave New Cyberlearning, Evolving Utopias (Circa 2041)

Burleson, W., & Lewis, A. (2016). International Journal of Artificial Intelligence in Education, 26(2), 796-808. 10.1007/s40593-016-0096-x
This essay imagines the role that artificial intelligence innovations play in the integrated living, learning, and research environments of 2041. Here, in 2041, in the context of increasingly complex wicked challenges, whose solutions by their very nature continue to evade even the most capable experts, society and technology have co-evolved to embrace cyberlearning as an essential tool for envisioning and refining utopias, non-existent societies described in considerable detail. Our society appreciates that evolving these utopias is critical to creating and resolving wicked challenges and to better understanding how to create a world in which we are actively "learning to be": deeply engaged in intrinsically motivating experiences that empower each of us to reach our full potential. Since 2015, Artificial Intelligence in Education (AIED) has transitioned from what was primarily a research endeavour, with educational impact involving millions of user/learners, to serving, now, as a core contributor to democratizing learning (Dewey 2004) and active citizenship for all (billions of learners throughout their lives). An expansive experiential supercomputing cyberlearning environment, which we affectionately call the "Holodeck," supports transdisciplinary collaboration and integrated education, research, and innovation, providing a networked software/hardware infrastructure that synthesizes visual, audio, physical, social, and societal components. The Holodeck's large-scale integration of learning, research, and innovation, through real-world problem solving and teaching others what you have learned, effectively creates a global meritocratic network with the potential to resolve society's wicked challenges while empowering every citizen to realize her or his full potential.

Smart home strategies for user-centered functional assessment of older adults

Ravishankar, V. K., Burleson, W., & Mahoney, D. (2015). International Journal of Automation and Smart Technology, 5(4), 233-242. 10.5875/ausmt.v5i4.952
Successful aging, independence, and the capacity for aging in place involve the maintenance and preservation of individuals' physical, mental, and social well-being. Elderly people need to maintain the capacity to perform both activities of daily living (ADL) and instrumental activities of daily living (IADL). Advances in Smart Home technologies are increasingly able to provide embedded assessments of an individual's functional ability in his/her home on a moment-to-moment, daily, and longitudinal basis. To date, in-situ functional assessment systems and research have focused more on the advancement of technologies than on the multi-faceted needs and experiences of users; however, the success of any technology depends more on the users than on the technology itself. This paper presents strategies for user-centric approaches to identify the technical and design challenges of developing, deploying, and using functional assessment systems in homes occupied by senior citizens. Case studies involved 4 healthy older adults (aged 65+) and examined the home deployment of smart home systems and interfaces aimed at assessing a combination of ADL and IADL activities. Pre- and post-activity interviews were used to better understand issues related to desire, privacy, technological acceptance, suitability, and need fulfillment/support. The results inform strategies for user-centered functional assessment and assistive technology design and implementation, providing information capture, analysis, and delivery of in-home functional assessment that has the potential to support aging in place.

Utilizing sensor data to model students' creativity in a digital environment

Muldner, K., & Burleson, W. (2015). Computers in Human Behavior, 42, 127-137. 10.1016/j.chb.2013.10.060
While creativity is essential for developing students' broad expertise in Science, Technology, Engineering, and Math (STEM) fields, many students struggle with various aspects of being creative. Digital technologies have the unique opportunity to support the creative process by (1) recognizing elements of students' creativity, such as when creativity is lacking (modeling step), and (2) providing tailored scaffolding based on that information (intervention step). However, to date little work exists on either of these aspects. Here, we focus on the modeling step. Specifically, we explore the utility of various sensing devices, including an eye tracker, a skin conductance bracelet, and an EEG sensor, for modeling creativity during an educational activity, namely geometry proof generation. We found reliable differences in sensor features characterizing low vs. high creativity students. We then applied machine learning to build classifiers that achieved good accuracy in distinguishing these two student groups, providing evidence that sensor features are valuable for modeling creativity.

Development of a responsive emotive sensing system (DRESS) to aid persons with dementia dress independently

Mahoney, D. F., Burleson, W., Lozano, C., Ravishankar, V., & Mahoney, E. (2014). Gerontechnology, 13(2). 10.4017/gt.2014.
Purpose: The goal of this study is to develop a 'smart dresser' for in-home use by people with moderate memory loss. This device is designed to provide individualized audio and visual task prompting, enabling people with moderate memory loss to dress while giving a respite to their caregivers. Prior research indicates that there has been insufficient attention to the stressors associated with dementia-related dressing issues1, which include stigmatizing clothing2 and wearable technology challenges3. Other researchers have suggested the need for innovative gerontechnology initiatives4. The DRESS system uniquely combines interactive context-aware skeletal-movement sensing, wrist-based affective emotion sensing, and fiducial fabric tag components to assess and respond to users in real time. Method: Mixed methods approach. To critique the DRESS design and provide usability recommendations, qualitative inductive focus group research was done with 25 family caregivers of persons who displayed dressing difficulties. System development followed an iterative path incorporating caregivers' feedback. Quantitative technical feasibility testing occurred in a controlled lab with ten actors portraying nine different standardized dressing scenarios. Results & Discussion: Caregivers validated the need for tangible dressing assistance to reduce frustration from time spent in repetitive cueing and from struggles over dressing5. They contributed six changes that influenced the smart dresser's conceptual-stage prototype development, most notably adding a dresser-top iPad to mimic a familiar 'TV screen' for the audio and visual cueing (Figure 1). DRESS demonstrated reliable operation, but the accuracy of clothing identification ranged from 16% for the most difficult inside-out pants layout to 100% for shirts. Adjustments were made to Kinect and fiducial threshold values, which increased the pants orientation accuracy rate to 81%.
The findings demonstrate proof of feasibility and validation of the conceptual development phase. Beta-stage development will follow this initial work.