In the creation of autonomous virtual characters, two levels of autonomy are common, often called motion synthesis (low-level autonomy) and behavior synthesis (high-level autonomy): an action (i.e., motion) achieves a short-term goal, while a behavior is a sequence of actions that achieves a long-term goal. A rich literature addresses many aspects of this general problem (discussed in the full paper). In this paper we present a novel technique for behavior (high-level) autonomy and utilize existing motion synthesis techniques.

Creating an autonomous virtual character with behavior synthesis abilities typically involves three stages: forming a model used to generate decisions, running the model to select a behavior to perform given the conditions in the environment, and carrying out the chosen behavior (translating it into low-level synthesized or explicit motions). For this process to be useful, it must efficiently produce realistic behaviors. We address both requirements with a novel technique for creating cognitive models: programming-by-demonstration addresses the first requirement, and data-driven behavior synthesis addresses the second. Demonstrated human behavior is recorded as sequences of abstract actions, the sequences are segmented and organized into a searchable data structure, and behavior segments are then selected by determining how well they accomplish the character’s long-term goal (see Fig. 1). The resulting model allows a character to engage in a very large variety of high-level behaviors.
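The pipeline just described can be illustrated with a minimal sketch. The segment length, the indexing scheme (keying segments by their first action), and the goal-scoring function below are all illustrative assumptions, not the paper's actual implementation:

```python
from collections import defaultdict

def segment(demonstration, length=3):
    """Split a recorded action sequence into overlapping segments
    (segment length is an illustrative choice)."""
    return [tuple(demonstration[i:i + length])
            for i in range(len(demonstration) - length + 1)]

def build_database(demonstrations, length=3):
    """Organize segments into a searchable structure: here, an index
    keyed by each segment's first action, so candidates that can
    follow the character's current action are found quickly."""
    db = defaultdict(list)
    for demo in demonstrations:
        for seg in segment(demo, length):
            db[seg[0]].append(seg)
    return db

def select_segment(db, current_action, goal_score):
    """Select the stored segment starting with current_action that
    best advances the character's long-term goal."""
    candidates = db.get(current_action, [])
    return max(candidates, key=goal_score, default=None)

# Toy demonstrations: abstract actions represented as strings.
demos = [
    ["wander", "approach", "open_door", "exit"],
    ["wander", "sit", "stand", "wander"],
]
db = build_database(demos, length=2)

# Hypothetical goal: favor segments containing "approach".
best = select_segment(db, "wander",
                      lambda seg: seg.count("approach"))
# → ("wander", "approach")
```

In a real system the goal-scoring function would encode the character's long-term objective, and the data structure would need to scale to many long demonstrations; this sketch only shows how segmentation, indexing, and goal-driven selection fit together.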