June 2017 marks the end of HARC’s first year. In that time, we have made significant progress towards our shared vision of advancing humanity through technological and educational innovation, especially in the areas of programming languages, simulation systems, physical/virtual user interfaces, computer-mediated student-teacher interaction, and virtual reality.
Here are year-end reports from HARC’s six Principal Investigators.
The eleVR project uses virtual and augmented reality technologies, combined with up-to-date research on embodied cognition, to gain insight into how we think and learn with our bodies. VR has now reached wide enough adoption to yield experiential data that goes beyond limited in-house studies, and it allows us to quickly prototype full-scale environments that engage the entire body. We’ve used VR to see physics in real time, walk through hyperbolic space, experience four-dimensional objects, and create an interactive museum of mathematics.
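The core computation behind "experiencing four-dimensional objects" can be illustrated with a small sketch. This is not eleVR's actual code (their work targets VR headsets); it is a plain-Python example, with invented names, of perspective-projecting the sixteen vertices of a tesseract (4D hypercube) down to 3D points that could then be rendered:

```python
# Illustrative sketch: projecting a tesseract's vertices from 4D to 3D.
# Names and the viewer distance are invented for this example.
from itertools import product

def project_4d_to_3d(vertex, viewer_w=3.0):
    """Perspective-project a 4D point to 3D by dividing by distance along w."""
    x, y, z, w = vertex
    scale = viewer_w / (viewer_w - w)  # points farther along w shrink
    return (x * scale, y * scale, z * scale)

# A tesseract's vertices are every combination of +/-1 in four coordinates.
tesseract = list(product((-1.0, 1.0), repeat=4))
projected = [project_4d_to_3d(v) for v in tesseract]

print(len(projected))  # 16 vertices, now as 3D points
```

In a VR setting the same projection runs every frame, so the viewer can rotate the object in 4D and watch its 3D "shadow" deform.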
To understand what’s happening when hands-on learning is effective, we need basic research into the mechanisms by which we turn our body’s experience into conceptual knowledge, and we need to design better ways to use our bodies as thinking tools. We’re bringing together techniques from a diverse array of fields to shed light on human capabilities that technology has thus far ignored.
The Lively project builds on Alan Kay’s original idea of a Dynabook: a computer companion for pondering, creating, simulating, and communicating. A live object system is particularly well suited to education and simulation, as it enables learners not just to play with complex systems, but to open the hood, see what makes them tick, and make changes while the system is still in motion — transforming users into authors, and players into learners.
We approach this goal by providing a dynamic, live, and user-programmable development environment for the Web. The lively.next platform implements different layers of abstraction (a module system, object serialization, graphical components, etc.) in a way that integrates with existing Web content, yet provides more flexibility for user interactions and creations than typical web pages or applications.
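The essence of "changes while the system is still in motion" can be sketched in a few lines. This is not lively.next code (which is JavaScript running in the browser); it is a minimal Python illustration, with an invented class, of redefining a live object's behavior without restarting anything:

```python
# A minimal sketch of the live-object idea: modify behavior at runtime.
# The Clock class here is invented for illustration.

class Clock:
    def tick(self):
        return "tick"

clock = Clock()
print(clock.tick())  # "tick"

# While the system runs, redefine the method on the live class...
def noisy_tick(self):
    return "TICK!"

Clock.tick = noisy_tick

# ...and every existing instance picks up the change immediately.
print(clock.tick())  # "TICK!"
```

In a live environment like lively.next, this kind of edit happens through the user interface, on the very objects the learner is looking at.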
Based on this foundation, we are focused on two user-facing projects: a live object presentation system (Prezo) and a live object curriculum project (Gadget). The Prezo project brings a live object model to presentation-style software (e.g., PowerPoint), redrawing the boundaries between composition and presentation. Presentations become living systems that learners manipulate, interrogate, and rewrite. We believe that this will transform the process of authorship and imbue learners with greater agency.
The Gadget project seeks to transform players into computational thinkers and architects of live object worlds, by combining the pleasure and power of live control with design ideas from popular games such as Minecraft. A good curriculum has much in common with modern game design: discovery of components and rules, experiential arcs that guide learning, the thrill of mastery, and authentic learning through self-directed activity.
To reach the greatest number of learners, educators, and users, Lively is available on the World Wide Web, for free.
Live exploration of runtime behavior
Draw and program
Computers have revolutionized the way we work, create, and even the way we think, but we believe this revolution has only just begun. All too often, computers are demanding and unhelpful assistants rather than fully effective partners. Our research aims to make them more helpful, useful, and expressive — for experts and learners alike.
We are interested in applying our research in the educational space, because this allows us to define both the tools, and — through the curriculum — the context in which they are used. We have already had some success with this approach. The Ohm Editor, for example, is a web-based environment for experimenting with the design and implementation of programming languages, centered around a novel visualization that gives learners powerful feedback as they work. It is the basis of a two-course sequence on programming languages that we created at UCLA, and is currently being used in a compilers course at Loyola Marymount University.
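To give a flavor of the kind of experimentation the Ohm Editor supports, here is a tiny language definition worked through by hand. This sketch is plain Python, not Ohm (which is a JavaScript library); the two-rule grammar and the parser are invented for illustration:

```python
# A hand-written recursive-descent parser/evaluator for a two-rule grammar:
#   expr   -> number ("+" number)*
#   number -> digit+
# In the Ohm Editor, a learner would write the grammar rules directly and
# watch a visualization of how each input is matched.

def evaluate(source):
    tokens = source.replace("+", " + ").split()
    pos = 0

    def number():
        nonlocal pos
        value = int(tokens[pos])  # raises ValueError on a non-number
        pos += 1
        return value

    # expr -> number ("+" number)*
    total = number()
    while pos < len(tokens) and tokens[pos] == "+":
        pos += 1
        total += number()
    if pos != len(tokens):
        raise SyntaxError("unexpected trailing input")
    return total

print(evaluate("1 + 2 + 30"))  # 33
```

The value of an environment like the Ohm Editor is that changing a grammar rule immediately shows which example inputs now succeed or fail, feedback that a hand-written parser like this one does not provide on its own.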
We are also working on a new programming language and environment that enables programmers to better see and understand the execution of their programs. We are creating an introductory programming course around this tool, and expect that it will help students build a deeper understanding of programming and develop powerful ways of thinking about computation.
GP is a new, general-purpose blocks language that is powerful yet easy to learn. GP users can write programs that generate graphics, manipulate images and sounds, analyze data, simulate scientific ideas, use cloud data, interact with the physical world, and more.
Experienced GP users can create and share extensions that add new blocks and facilities to GP. For example, a teacher might create a library of blocks for manipulating sound, including a live visualization of sound from the computer microphone, then share that extension with their students. GP extensions are written in the GP blocks language, so extension writers do not need to install or learn any other programming language.
GP grew out of experience with Scratch, a blocks-based visual programming language created at the MIT Media Lab and used by over seventeen million children around the world. Scratch is now one of the 20 most popular programming languages according to the TIOBE index. John Maloney was the lead developer for Scratch over its first eleven years, and is incorporating lessons from Scratch into the design of GP.
GP is a natural next step for those who have used Scratch. In education, GP is ideal for grades 8-12, introductory college-level computer science classes, or adding a hands-on computing component to courses in science, math, and the arts.
GP is also great for anyone who wants to make their own app but doesn’t want to deal with complex languages and tools designed for professional software developers. A single click exports a GP program as a stand-alone executable or uploads it to the web.
Students see the results of the program as it runs. Image manipulation helps students understand programming concepts, media representation, and human perception (e.g., human eyes have receptors for red, green, and blue).
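The perception point above can be made concrete with a small per-pixel example. This sketch is written in Python rather than GP blocks, and the function name and weights are an illustration of one common grayscale formula, not GP's built-in behavior:

```python
# Illustrative per-pixel image manipulation: RGB to grayscale using
# luminance weights that mirror the eye's uneven color sensitivity.

def to_grayscale(pixels):
    """pixels: a list of (r, g, b) tuples with values 0-255."""
    gray = []
    for r, g, b in pixels:
        # Green contributes most because human eyes are most sensitive to it.
        luma = int(0.299 * r + 0.587 * g + 0.114 * b)
        gray.append(luma)
    return gray

print(to_grayscale([(255, 0, 0), (0, 255, 0), (0, 0, 255)]))  # [76, 149, 29]
```

Seeing that pure green comes out nearly twice as bright as pure red, and five times brighter than pure blue, ties the programming exercise directly to how human vision works.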
We are working on the future of computer-based tutoring systems for children. State-of-the-art tutoring systems are not able to accurately assess learner ability levels and understanding, or to perceive situations in which students are operating under misconceptions. As a result, they force the learner to follow pre-determined steps mechanically. An expert human teacher, on the other hand, provides a richer learning environment: the teacher offers suggestions, gives encouragement, and, most importantly, proactively asks questions that make the learner think. Teaching sessions also often involve physical interaction, such as pointing at the computer screen with a finger or making hand gestures. We would like to build tutoring systems that leverage these interactions so that we can reach a larger number of children.
We recorded a few one-on-one teaching sessions of a visual programming language, and analyzed the interaction between the teacher and learner. We studied the teacher’s techniques: when and how the teacher gives an answer, when the teacher withholds an answer and instead asks the same question in different ways, and so on. We also created a Wizard-of-Oz-style tutoring system for further experiments.
In another project, we are creating a prototype of a programming system called Shadama. Shadama is designed for writing programs that create, control, and visualize large numbers of objects. The primary goal of the language is to facilitate the writing of scientific simulations by students at the high school level.
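To show what "large numbers of objects" means in practice, here is a sketch of the kind of simulation Shadama targets. This is plain Python with invented names, not Shadama syntax: ten thousand particles, each updated every time step, falling under gravity and bouncing off a floor:

```python
# Illustrative many-particle simulation: the style of program Shadama
# is designed to make easy for students. Names are invented for this sketch.
import random

def step(particles, dt=0.1, gravity=-9.8):
    """Advance every particle one time step; bounce off the floor at y = 0."""
    for p in particles:
        p["vy"] += gravity * dt
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt
        if p["y"] < 0:            # crude floor collision: reflect
            p["y"] = -p["y"]
            p["vy"] = -p["vy"]

random.seed(1)
particles = [{"x": 0.0, "y": random.uniform(1, 10), "vx": 1.0, "vy": 0.0}
             for _ in range(10_000)]
for _ in range(100):
    step(particles)

print(all(p["y"] >= 0 for p in particles))  # True: every particle stays above the floor
```

A system like Shadama aims to let students express this kind of per-particle rule directly and see all ten thousand particles animate at once, without writing the bookkeeping loop themselves.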