Abstract

 

Two Applications of Neural Network Technology: User Modeling and Prediction

Larry Manevitz, University of Haifa

 

We will indicate some of the scope of current neural network technology by briefly sketching two recent applications, with an eye towards future cooperation within the Trento-Haifa agreement.

1. User Modeling:

We show how a neural network can be used to learn a filter from positive information only, and illustrate it with joint results with Malik Yousef of the University of Pennsylvania. This can be applied to produce an adaptive "user model" that filters, for example, information from the Internet.

This method has been shown to be superior to other one-class methods (including one-class Support Vector Machines, Naive Bayes, and so on).
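
For illustration, a minimal sketch of the idea (our own rendering of one plausible realization: a bottleneck autoencoder network trained only on positive document vectors, with relevance decided by reconstruction error; the function names and parameters below are illustrative, not taken from the original work):

    # Hedged sketch: one-class filtering by a bottleneck autoencoder trained
    # only on positive examples; documents reconstructed poorly are rejected.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def train_one_class_filter(X_pos, bottleneck=16, quantile=0.95):
        """X_pos: (n_docs, n_features) vectors of positive (relevant) documents."""
        ae = MLPRegressor(hidden_layer_sizes=(bottleneck,), max_iter=2000)
        ae.fit(X_pos, X_pos)                      # learn to reconstruct positives only
        err = ((ae.predict(X_pos) - X_pos) ** 2).mean(axis=1)
        return ae, np.quantile(err, quantile)     # tolerate the worst few positives

    def is_relevant(ae, threshold, x):
        err = ((ae.predict(x[None, :]) - x) ** 2).mean()
        return err <= threshold                   # small error: looks like the positive class

Such a filter needs no negative (irrelevant) examples at all, which is what makes it suitable as an adaptive user model.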

2. Prediction

In joint work with D. Givoli of the Technion and A. Bitar of IBM Israel, we show how to use a "prediction neural network" to anticipate gradients in wave functions. This provides leverage for numerical techniques: in the Finite Element Method, adaptive meshing guided by the predicted gradients can produce dramatic increases in accuracy for the same computational load. This technique has been submitted for a U.S. patent.
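
As a rough illustration of how such predictions can drive adaptive meshing (an assumed workflow sketched by us, not the patented technique itself; all names and parameters are hypothetical):

    # Hedged sketch: a small network learns to predict local gradient magnitudes,
    # and elements with the largest predicted gradients are marked for refinement.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def mark_for_refinement(feat_train, grad_train, feat_new, top_fraction=0.2):
        """feat_*: per-element descriptors (e.g. coordinates, coarse-solution values);
        grad_train: gradient magnitudes observed on earlier or coarser solves."""
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
        net.fit(feat_train, grad_train)
        predicted = net.predict(feat_new)
        cutoff = np.quantile(predicted, 1.0 - top_fraction)
        return predicted >= cutoff                # boolean mask: refine these elements

Concentrating degrees of freedom where large gradients are anticipated is what yields the higher accuracy for the same computational load.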

(This work is the latest in a series of collaborations with D. Givoli on improving the FEM by the use of soft computing techniques.)

 


 

Emotional Intelligence: Knowns and Unknowns

Moshe Zeidner, University of Haifa

 

For many years, researchers have wondered what lies beyond intelligence; that is, whether there are social or personal abilities that may predict real-world success over and above conventional ability measures. Recently, the construct of emotional intelligence (EI) has emerged as one of the most high-profile ability constructs of this kind (Goleman, 1998; Matthews, Zeidner, & Roberts, 2002). EI may be broadly defined as a set of competencies for identifying, processing, and managing emotion that support insight into self and others and more effective coping with the demands of everyday life. EI research has prospered, in part, due to the increasing personal importance of intelligence for people in modern society, with EI commonly claimed to predict important educational and occupational criteria above and beyond what is predicted by general intellectual ability. However, despite a high level of interest in EI among both researchers and practitioners, the science of EI is in its infancy, and many key questions remain unanswered.

 

It is commonly claimed that tests for EI are predictive of important educational and occupational criteria, beyond that proportion of variance which general intellectual ability predicts. In fact, it has been suggested that EI may be more important than IQ in predicting outstanding performance in the upper strata of leadership (Goleman, 1998). As one group of commentators has argued: “If the driving force of intelligence in twentieth century business has been IQ, then … in the dawning twenty-first century it will be EQ” (Cooper & Sawaf, 1997, p. XXVII). These large claims have not been substantiated by rigorous empirical research (Matthews et al., 2002). Nevertheless, there is accumulating evidence that tests for EI measure something more than established ability and personality constructs, and may predict various criteria relating to social functioning and adaptation (Mayer, Salovey, & Caruso, 2000).

 

However, there are various obstacles to realizing the potential benefits of studying EI, as noted by Matthews et al. (2002) in their recent review of the field. First, there is no agreed definition or conceptualization of EI. It is unclear whether EI is cognitive or non-cognitive, whether it refers to explicit or implicit knowledge of emotion, and whether it refers to a basic aptitude or to some adaptation to a specific social and cultural milieu (Zeidner, Matthews, & Roberts, 2001). Second, it is unclear how EI may be best measured. Both objective tests and self-report questionnaires have been developed, but scores on different instruments fail to converge particularly well. These measures also relate differently to other individual difference constructs. Objective tests, notably those developed by Mayer et al. (2000), are moderately correlated with both general intelligence and personality dimensions. Self-report scales are very highly confounded with existing personality constructs but are independent of conventional intelligence (e.g., Dawda & Hart, 2000). Third, the practical utility of tests for EI is limited by these conceptual and psychometric deficiencies. There are some indications of predictive validity (e.g., Mayer et al., 2000), but as yet there is too little validity evidence for the tests to be used with confidence in making real-world decisions, such as hiring a job applicant on the basis of their score on a test of EI (Zeidner, Matthews, & Roberts, in press). Intervention programs that seek to raise EI typically lack a clear theoretical and methodological basis, and often employ a ragbag of techniques whose psychological effects are unclear (Zeidner, Roberts, & Matthews, 2002).

 

The time is right for a reevaluation of the current status of EI, leading to a systematic approach towards new research. Thus, the purpose of this talk is to summarize what we know about EI with respect to theory, assessment, and applications, and to point out directions for future research. The talk will aim to translate a sharper theoretical focus into practical recommendations in occupational, educational, military, computational, and other contexts, so as to improve the utility of future assessment and intervention efforts.

 


 

Learning and Exploiting Relative Weaknesses of Opponent Agents

Shaul Markovitch, Technion

 

Acquiring an accurate model of a complex opponent strategy may be computationally infeasible.

We introduce a methodology for simplifying the task by inducing only one aspect of the opponent's strategy: its weakness.

We define weakness as the set of states where the opponent's relative performance is weak. The model is inferred using traditional induction techniques and is combined with the agent's strategy to push the opponent towards positions estimated to belong to its weakness. The algorithm was tested on two game domains and proved to be effective. This is joint work with Ronit Reger.
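
As a rough sketch of this pipeline (our own illustration, not the paper's code; the decision-tree learner and the scoring scheme below are assumptions):

    # Hedged sketch: (1) label observed states by the opponent's relative performance,
    # (2) induce a model of its weakness, (3) bias move selection towards successor
    # states the model predicts to be weak for the opponent.
    from sklearn.tree import DecisionTreeClassifier

    def learn_weakness_model(state_features, opponent_was_weak):
        model = DecisionTreeClassifier(max_depth=5)
        model.fit(state_features, opponent_was_weak)    # labels: 1 = weak, 0 = otherwise
        return model

    def choose_move(moves, successor_features, base_scores, model, bonus=0.5):
        # combine the agent's own evaluation with the predicted weakness
        # (assumes both labels were present when the model was trained)
        weak_prob = model.predict_proba(successor_features)[:, 1]
        best = max(range(len(moves)), key=lambda i: base_scores[i] + bonus * weak_prob[i])
        return moves[best]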

 


 

Computer Chess: state of the art

Shay Bushinsky, University of Haifa

 

Shay Bushinsky, the co-author of Deep Junior, a World Computer Chess Champion, will deliver a lecture about Deep Junior's recent match against Garry Kasparov. The match was the climax of the 10-year Junior Project, during which the program won the world title three times. The recent results between Deep Junior and Kasparov, as well as those between Deep Fritz and Kasparov and Vladimir Kramnik, will be examined. These matches shed further light on the current strengths and weaknesses of the best computer programs. In particular, the lecture will expand on classical anti-computer chess, a theory that has evolved in recent years out of human experience gained in play against AI, and on the counter-methods employed by the programmers.

 


 

Reasoning About Time -- Recent Research on Interval and Tolerance Graphs

Martin Charles Golumbic, University of Haifa

 

Reasoning about time is essential for applications in artificial intelligence and in many other disciplines. Given certain explicit relationships between a set of events, we would like to have the ability to infer additional relationships which are implicit in those given. For example, the transitivity of ``before'' and ``contains'' may allow us to infer information regarding the sequence of events. Such inferences are essential in story understanding, planning and causal reasoning.
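
A toy illustration of such inference (ours, not from the talk): computing the implicit ``before'' relations as the transitive closure of the explicitly given ones.

    # Hedged sketch: repeatedly add (a, d) whenever (a, b) and (b, d) are known,
    # until no new ``before'' pairs can be inferred.
    def infer_before(explicit_pairs):
        before = set(explicit_pairs)
        changed = True
        while changed:
            changed = False
            for a, b in list(before):
                for c, d in list(before):
                    if b == c and (a, d) not in before:
                        before.add((a, d))
                        changed = True
        return before

    # infer_before({("excavation", "pottery layer"), ("pottery layer", "bronze layer")})
    # also contains ("excavation", "bronze layer").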

 

There are a great number of practical problems in which one is interested in constructing a time line where each particular event or phenomenon corresponds to an interval representing its duration. These include seriation in archeology, behavioral psychology, temporal reasoning, scheduling, and combinatorics. Other applications arise in non-temporal contexts, for example, in molecular biology, where arrangement of DNA segments along a linear DNA chain involves similar problems.

Events are represented by abstract time points and time intervals, and we process and deduce ``relationships'' between them, such as pairs intersecting each other, one preceding, following or containing another, etc. The techniques used in the qualitative approach to temporal reasoning often involve combinatorial and graph-theoretic methods. In particular, interval graphs, interval algebras, tolerance graphs and orders are often used to model temporal problems.
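
For concreteness, a small sketch (ours) of deducing the qualitative relation between two events given as intervals, and of the intersection test underlying the interval-graph model:

    # Hedged sketch: qualitative relation between intervals (start, end),
    # and the edge test of the corresponding interval graph.
    def relation(i, j):
        (a1, b1), (a2, b2) = i, j
        if b1 < a2:
            return "precedes"
        if b2 < a1:
            return "follows"
        if a1 <= a2 and b2 <= b1:
            return "contains"
        if a2 <= a1 and b1 <= b2:
            return "is contained in"
        return "overlaps"

    def intersects(i, j):
        # vertices of an interval graph are adjacent exactly when their intervals meet
        return not (i[1] < j[0] or j[1] < i[0])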

 

Tolerance graphs were introduced by [Golumbic and Monma, 1982] to generalize some of the applications associated with interval graphs. The motivation was the need to solve scheduling problems in which resources such as rooms, vehicles, support personnel, etc. may be needed on an exclusive basis, but where a measure of flexibility or tolerance would be allowed for sharing or relinquishing the resource when total exclusivity prevented a solution. Since then, properties of this model, and quite a number of variations of it, have continued to be an interesting and active area of investigation.
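
A minimal sketch of the adjacency rule in this model (following the standard definition, with illustrative names chosen by us):

    # Hedged sketch: each vertex carries an interval and a tolerance; two vertices
    # are adjacent when the length of their overlap reaches the smaller tolerance,
    # so smaller overlaps can be "tolerated" (e.g. two meetings sharing a room).
    def overlap_length(i, j):
        return max(0, min(i[1], j[1]) - max(i[0], j[0]))

    def tolerance_adjacent(interval_u, tol_u, interval_v, tol_v):
        return overlap_length(interval_u, interval_v) >= min(tol_u, tol_v)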

 


 

Cooperation of Self-Interested Agents

Sarit Kraus, Bar-Ilan University

 

Self-interested computer agents operating in heterogeneous groups that include people must take human behavior into account in their decision-making. This talk describes a new multi-player game, Colored Trails (CT), which may be played by people, computers, and heterogeneous groups. CT was designed to enable investigation of properties of decision-making strategies in multi-agent situations of varying complexity. The talk presents the results of an initial series of experiments with CT games in which agents' choices affected not only their own outcomes but also the outcomes of other agents, and in which the information players had about one another was varied. It compares the behavior of people with that of computer agents deploying a variety of decision-making strategies. The results align with behavioral economics studies in showing that people cooperate when they play and that factors of social dependency affect their levels of cooperation.

 

Preliminary results indicate that people design agents to play strategies closer to game-theory predictions, yielding lower utility. Additional experiments show that such agents perform worse than agents designed to make choices that resemble human cooperative behavior.

This is joint work with Barbara Grosz, Shavit Talman, Boaz Stosseb and Moti Havlin.

 


 

A Sub-quadratic Genome Alignment Algorithm

Gad M. Landau, University of Haifa

 

The rapid progress in large-scale DNA sequencing opens a new level of computational challenges involved in storing, organizing and analyzing the wealth of biological information. One of the most interesting new fields that the availability of complete genomes has created is that of genome comparison (the genome is all of the DNA sequence passed from one generation to the next). Comparing complete genomes can give deep insights into the relationships between organisms, as well as shed light on the function of specific genes in each individual genome. The challenge of comparing complete genomes necessitates the creation of additional, more efficient computational tools.

 

One of the most common problems in biological comparative analysis is that of aligning two long bio-sequences in order to measure their similarity. The alignment is classically based on the transformation of one sequence into the other via operations of substitutions, insertions, and deletions (indels). Their costs are given by a scoring matrix.

 

The classical algorithm for computing the similarity between two sequences uses a dynamic programming matrix, and compares two strings of size n in O(n^2) time. We address the challenge of computing the similarity of two strings in sub-quadratic time, for metrics which use a scoring matrix of unrestricted weights. Our algorithm applies to both local and global similarity computations.
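
For reference, the classical quadratic-time computation mentioned above, in the global case (a standard textbook sketch, not the sub-quadratic algorithm of this talk):

    # Hedged sketch: classical dynamic programming over an (n+1) x (m+1) table,
    # with substitution scores from a scoring matrix and a fixed indel score.
    def global_similarity(s, t, score, gap):
        """score[(a, b)]: substitution score; gap: score of an indel (typically negative)."""
        n, m = len(s), len(t)
        D = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0] = D[i - 1][0] + gap
        for j in range(1, m + 1):
            D[0][j] = D[0][j - 1] + gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i][j] = max(D[i - 1][j - 1] + score[(s[i - 1], t[j - 1])],
                              D[i - 1][j] + gap,      # deletion from s
                              D[i][j - 1] + gap)      # insertion into s
        return D[n][m]                                # O(n*m) time and space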

 

The speed-up is achieved by dividing the dynamic programming matrix into variable-sized blocks, as induced by Lempel-Ziv parsing of both strings, and utilizing the inherent periodic nature of both strings. This leads to an O(n^2 / log n) algorithm for an input of constant alphabet size. For most texts, the time complexity is actually O((h n^2) / log n), where h < 1 is the entropy of the text.
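
To illustrate where the variable-sized blocks come from (only the parsing step; the sub-quadratic algorithm itself is considerably more involved), an LZ78-style parse breaks each string into phrases, each extending a previously seen phrase by one character:

    # Hedged sketch of LZ78-style parsing; repetitive (low-entropy) strings
    # yield fewer, longer phrases, and hence fewer, larger blocks.
    def lz78_phrases(s):
        phrases, seen, cur = [], {""}, ""
        for ch in s:
            if cur + ch in seen:
                cur += ch
            else:
                phrases.append(cur + ch)
                seen.add(cur + ch)
                cur = ""
        if cur:
            phrases.append(cur)                 # possibly incomplete last phrase
        return phrases

    # lz78_phrases("aacgacga") -> ['a', 'ac', 'g', 'acg', 'a']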

 


 

Usability and accessibility: Design for universal accessibility based on robust metaphors, modular structures and intuitive interfaces

Tamar Weiss, University of Haifa

 

One of the major limitations associated with the design of advanced, novel technologies is the difficulty associated with their usability, i.e., the ease with which potential users can operate them (Kantowitz & Sorkin, 1983; Nielsen, 1994). The human factors literature demonstrates that the interface between a particular technology and a given user may be characterized as "usable" only if it is effective, efficient and satisfying; technologies that cannot be used accurately or completely, that entail the expenditure of inordinate effort, or that do not generate sufficient comfort or enjoyment will be underused or not used at all (Norman, 1998).

 

Usability is a particularly demanding design challenge in the case of technologies that are intended to fulfill the needs of diverse segments of the population, such as those in the present research initiative that aim to enhance the appreciation of cultural heritage via interactive, educational and entertaining applications. Some sources of diversity within the population, including variables such as age, gender, language, and cultural background, are occasionally, albeit insufficiently, considered by developers of new technologies (Story et al., 1998). Other sources of diversity, including impairments of motor, cognitive, sensory, learning and intellectual abilities, are almost entirely disregarded (Vanderheiden, 1990). Consider the design of most pages on the World Wide Web, one of the most frequently encountered technologies available today. Typically they use fonts that are difficult to read, their layout does not focus attention on the more important material, and access to linked information necessitates a high level of perceptual-motor skill.

 

Our objective is to provide a priori support by formulating design protocols based on heuristic principles such that each application in this project will achieve an optimal degree of universal accessibility (Rohlin, 2002). First, we will present an overview of a variety of techniques that have been shown to enhance universal accessibility as part of the design of each application. Such techniques include (1) the use of robust metaphors that are comprehended across diverse cultures, languages and age groups, (2) modular construction that permits the use of full or "lite" versions of each application, and (3) intuitive, built-in training (ideally based upon an intelligent agent) that anticipates and responds to the limitations of a given user. Second, we will address the issue of interactive, multimodal (3-D visual, auditory and haptic) simulations using tools from the domain of virtual reality to test mock-ups of each application prior to its dissemination. An iterative design-programming-simulated testing process ensues until a satisfactory level of usability is achieved.