Theses and Dissertations

2007

ABSTRACTS

Departamento de Informática 
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
 

This file contains the list of the M.Sc. dissertations and Ph.D. theses presented to the Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio, Brazil, in 2007.  They are all available in print format and, according to the authors' preference, some of them are freely available for download, while others are freely available for download to the PUC-Rio community exclusively (*).

For any requests, questions, or suggestions, please contact:
Rosane Castilho bib-di@inf.puc-rio.br

Last update: 10/MARCH/2008
 

INDEX


[07_MSc_correia]
Adolfo Guilherme Silva CORREIA. Uma arquitetura baseada em agentes de software para a automação de processos de gerência de falhas em redes de telecomunicações. [Title in English: A software agents based architecture for the automation of fault management processes in telecommunications networks]  M.Sc. Diss. Port. Presentation: 03/04/07  114 p. Advisor: Carlos José Pereira de Lucena.

Abstract: The last few years have been marked by a significant and worldwide growth in the demand for telecommunications services. Such a scenario of network expansion, and the need for coexistence and interoperability of different technologies in an economically viable way, poses great challenges for the management, operation and maintenance of telecommunications networks. This work presents some of the main network management models and paradigms traditionally employed in telecommunications networks and still widely adopted in the industry to this day. Many of the presented models have been significantly influenced by concepts and techniques originated in the software engineering field. Particular emphasis is given to the use of network management techniques based on software agents. To this end, important concepts of software agents are presented, as well as examples of works where software agents are used in the network management domain. Finally, an architecture based on software agents for fault management in legacy telecommunications networks, which are usually managed by centralized systems, is proposed. The main objective of this architecture is to allow the diagnosis and correction of network faults without overloading the centralized management system. To this end, the architecture uses software agents that distribute information maintained in the centralized management system to other agents of the system. In this way, the agents responsible for executing the fault diagnosis and correction procedures can perform their activities without needing direct communication with the centralized system.


[07_MSc_barros]
Alexandra Barreto Assad de BARROS. Finalizadores e ciclos em tabelas fracas. [Title in English: Implementing finalizers using weak references]  M.Sc. Diss. Port. Presentation: 13/04/07  75 p. Advisor: Roberto Ierusalimschy.

Abstract: Weak references and finalizers constitute an elegant alternative to obtain control over the interaction between the application and the garbage collector. However, in some contexts, finalizers are not necessary, because it is possible to extend the weak reference mechanism in order to support finalization. In this work, we present a survey of the most common uses of these mechanisms. We also show how weak references can replace finalizers, proposing a weak-reference-based implementation for each finalizer use. Based on this survey, we developed a finalization mechanism based on weak references for the Lua programming language. Motivated by our proposal of a better exploration of the weak reference mechanism, we developed a solution for an important problem related to cycles in weak tables, a structure created using weak references. Cyclic references between keys and values prevent the elements inside the cycle from being collected, even if they are no longer reachable. This ends up bringing difficulties to the use of weak tables in certain kinds of applications. The Haskell programming language solved this problem by implementing an adaptation of a mechanism called ephemerons. Based on this fact, we modified the Lua garbage collector in order to support ephemerons. As a result, we were able to solve the problem of cycles in weak tables in Lua.
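
The interplay between weak references, finalizers and the key/value cycle problem summarized above can be illustrated compactly. A minimal sketch using Python's standard weakref module (illustrative only; it is not the Lua mechanism developed in the dissertation):

    import gc
    import weakref

    class Resource:
        """Some object that owns an external resource."""
        def __init__(self, name):
            self.name = name

    def release(name):
        # Runs once the referent has become unreachable and been collected.
        print("releasing", name)

    r = Resource("db-handle")

    # A finalizer expressed through the weak-reference machinery: it does not
    # keep 'r' alive and fires when 'r' is collected.
    weakref.finalize(r, release, r.name)

    # A weak mapping analogous to a weak table: an entry disappears when its
    # key dies, instead of keeping the value alive.
    cache = weakref.WeakKeyDictionary()
    cache[r] = "associated data"

    del r
    gc.collect()
    print(len(cache))   # 0: the entry vanished together with its key

    # The key/value cycle problem: if the value strongly references its own
    # key, the entry is never collected -- the case ephemerons address.
    k = Resource("key")
    cache[k] = ("value pointing back to its key", k)
    del k
    gc.collect()
    print(len(cache))   # 1: the entry survives, illustrating the problem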


[07_MSc_cerqueira]
Ana Luiza Ávila CERQUEIRA. Integração de ontologia com modelagem de processo: um método para facilitar a elicitação de requisitos. [Title in English: Ontology integration from UDI to business process modeling: an approach to facilitate requirements elicitation]  M.Sc. Diss. Port. Presentation: 16/04/07  239 p. Advisor: Julio Cesar Sampaio do Prado Leite.

Abstract: Requirements definition, where engineers interact with clients to better know the organization's activities and understand their needs, is a key process for the success of an information system. As such, requirements defined with clarity and according to the client's needs are fundamental to the development of effective information systems for the organization. This dissertation proposes a method to help requirements engineers in the task of eliciting requirements. These requirements must be in accordance with the business, so as to improve the quality of the organization's information systems. The proposed method integrates process modeling and business ontology. The work assumes that the integration of these perspectives provides a solid source of organizational knowledge. Using this knowledge source, requirements engineers are empowered to define requirements that better fit the organization's business. The integration of process modeling and business ontology is described in detail and exemplified by a case study.
 

[07_MSc_fialho]
André Tadeu Santos FIALHO. Uso de animações em interfaces de aplicações hipermídia. [Title in English: Use of animations in hypermedia application interfaces] M.Sc. Diss. Port. Presentation: 30/07/07 133 p. Advisor: Daniel Schwabe

Abstract: In this dissertation we introduce an approach for the authoring of animated transitions in Web applications. The transitions can be defined as navigational state changes that result in the alteration of the presented widgets and interface. These alterations are usually represented abruptly through non-animated interfaces. The use of animation allows a gradual representation of the transformations that occur, characterizing a smooth transition that reveals to the user the underlying navigation operation. The approach is divided into two steps: interface modeling and transition modeling. In the first step, we define the interfaces through the abstract interface specification provided by the SHDM/OOHDM method. In the transition modeling, we identify the animations for each transformation and define the animation rhetorics: the transition rhetorical structure, which defines the animation execution sequence, and the rhetoric styles, which define the effect and duration properties of the animation. The approach was implemented by extending the hypermedia authoring environment HyperDE, providing smooth transitions for the generated application prototypes. In order to allow the execution of the animations, we have also developed a transition library using Dynamic HTML technology. A preliminary evaluation with users showed greater satisfaction.
 

[07_MSc_figueiredo]
Aurélio Moraes FIGUEIREDO. Mapeamento automático de horizontes e falhas em dados sísmicos 3D baseado no algoritmo de gás neural evolutivo. [Title in English: Automatic mapping of horizons and faults in 3D seismic data based on the growing neural gas algorithm] M.Sc. Diss. Port. Presentation: 29/06/07  79 p. Advisor: Marcelo Gattass.

Abstract: In this work we present a clustering-based method to map seismic horizons and faults from 3D seismic data. We describe a method used to quantize an initial seismic volume using a trained instance of the Growing Neural Gas (GNG) algorithm. To accomplish this task we create a training set where each sample corresponds to a voxel of the input volume, retaining its vertical neighboring information. After the training procedure, the resulting graph is used to create a quantized version of the original volume. In this quantized volume both horizons and faults are more evident in the data, and we present a method that uses the created volume to map seismic horizons, even when they are completely disconnected by seismic faults. We also present another method that uses the quantized version of the volume to map the seismic faults. The horizon mapping procedure, tested on different volume data, yields good results. The preliminary results presented for the fault mapping procedure are also good, but need further testing.
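
A minimal numpy sketch of the training-set construction step described above, in which each sample is a voxel amplitude together with its vertical (depth-axis) neighbors (window size and array layout are assumptions, and the GNG training itself is omitted):

    import numpy as np

    def build_training_set(volume, half_window=2):
        """volume: 3D array of seismic amplitudes indexed as (inline, xline, depth).
        Returns one sample per voxel, containing the voxel and its vertical
        (depth-axis) neighbors, ready to be fed to a vector quantizer."""
        ni, nx, nz = volume.shape
        w = 2 * half_window + 1
        # Pad along the depth axis so border voxels also get a full window.
        padded = np.pad(volume, ((0, 0), (0, 0), (half_window, half_window)),
                        mode="edge")
        samples = np.empty((ni * nx * nz, w), dtype=volume.dtype)
        idx = 0
        for i in range(ni):
            for x in range(nx):
                trace = padded[i, x]
                for z in range(nz):
                    samples[idx] = trace[z:z + w]
                    idx += 1
        return samples

    # Usage: feed build_training_set(seismic_volume) to a Growing Neural Gas
    # (or any vector quantizer) and replace each voxel by the index of its
    # nearest prototype to obtain the quantized volume.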


[07_MSc_vinhosa]
Bernardo Arraes VINHOSA. Uma abordagem para a evolução transparente em repositórios de medição de software: o sistema Clairvoyant. [Title in English: An approach for transparent evolution in software measurement repositories: the Clairvoyant system]  M.Sc. Diss. Port. Presentation: 23/03/07  109 p. Advisor: Arndt von Staa.

Abstract: The Clairvoyant system is a software measurement repository prototype which stands out for allowing transparent evolution of its measurement model. This means that changes can be made to its measurement model without exposing the underlying storage structure that makes this evolution possible. This is an important concern because the information needs to which the measurements respond constantly evolve. The Clairvoyant system was designed based on a measurement meta-model and a measurement query model to make it possible to transparently evolve its measurement model. This work explains these models and studies their influence on the repository's operational macro-processes (measurement model maintenance, measurement data importing, measurement data querying and measurement data exporting).


[07_PhD_laufer]
Carlos Cesar LAUFER. Contract oriented web services model (COWS) - um modelo baseado em contratos para suporte a processos de negócios na web. [Title in English: Contract Oriented Web Services Model (COWS) – a semantic contract support for e-business processes]  Ph.D. Thesis. Port. Presentation: 02/04/07 154 p. Advisor: Daniel Schwabe.

Abstract: Business processes are established via relationships between partners with a common goal. These relationships are specified in contracts, which can be explicit or implicit, oral or written, and so on. When a person searches for a business partner, she is looking for a partner that can fulfill a relationship specified in a contract. To support such processes on the Internet (and on the Web) it is necessary to characterize all of their aspects, such as agents, contracts, roles, relationships, interactions between partners, policies, etc. This thesis presents the Contract Oriented Web Services Model (COWS) - a model for an appropriate environment for e-business dialogues, implemented using Web Services. COWS is based on well-defined contracts agreed upon by all concerned parties and incorporates various levels of applicable policies. These policies can be related to payment methods, quality of service (QoS), privacy policies, rights, product returns, trust, etc. Contracts may refer to other contracts and are valid within fora, which have default global policies. A prototype web environment supporting COWS has been implemented to test the concepts that extend the discovery process. All COWS models have been specified as ontologies, using Flora-2.


[07_MSc_batista]
Carlos Freud Alves BATISTA. Métricas de segurança de software. [Title in English: Software security metrics]  M.Sc. Diss. Port. Presentation: 16/04/07  102 p. Advisor: Arndt von Staa.

Abstract: Today's growing dependency on information technology (IT) makes software security a key element of IT services. In recent years public and private institutions have raised their investment in information security; however, the number of attacks is growing faster than our power to face them, putting at risk intellectual property, customers' confidence and businesses that rely on IT services. Experts say that most information security incidents occur due to the vulnerabilities that exist in software systems in the first place. Security metrics are essential to assess software dependability with respect to security, and also to understand and manage the impacts of security initiatives in organizations. However, security metrics are shrouded in mystery and very hard to implement. This work intends to show that there are no adequate metrics capable of indicating the security level that a software system will achieve. Hence, we need other practices to assess the security of software while developing it and before deploying it.


[07_MSc_sousa]
Daniel Xavier de SOUSA. Estratégias de balanceamento de carga para avaliação paralela do BLAST com bases de dados replicadas e fragmentos primários. [Title in English: Workload balancing strategies for parallel BLAST evaluation on replicated databases and primary fragments]  M.Sc. Diss. Port. Presentation: 27/07/07  84 p.  Advisor: Sérgio Lifschitz.

Abstract: A fundamental task in the area of computational biology is the search for relevant information within the large amount of available data. Among others, it is important to run tools such as BLAST - Basic Local Alignment Search Tool - efficiently, which enables the comparison of biological sequences and the discovery of homologies and other related information. However, the execution cost of BLAST is highly dependent on the database size, which has increased considerably. The evaluation of BLAST in distributed and parallel environments like PC clusters has been largely investigated in order to obtain better performance. This work reports a replicated allocation of the (sequence) database where each copy is also physically fragmented, with some fragments assigned as primary. This way we show that it is possible to execute BLAST with some nice characteristics of both the replicated and the fragmented conventional strategies, like flexibility and I/O parallelism. We propose two dynamic workload balancing strategies associated with this data allocation. We have adopted a non-intrusive approach, i.e., the BLAST code remains unchanged. These methods are implemented and practical results show that we achieve not only a balanced workload but also very good performance.
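
A minimal sketch of the kind of dynamic, pull-based balancing described above, where worker nodes take database fragments from a shared queue and invoke an unmodified BLAST binary (Python threads, the blastall command line, fragment names and the query file are illustrative assumptions, not the dissertation's actual implementation):

    import queue
    import subprocess
    import threading

    def run_blast(worker_id, fragments, query_file, results):
        # Each worker repeatedly pulls the next pending fragment from the
        # shared queue, so faster nodes naturally process more fragments.
        while True:
            try:
                frag = fragments.get_nowait()
            except queue.Empty:
                return
            # BLAST itself is left unchanged (non-intrusive): only the fragment
            # handed to it varies.  Assumes a blastall-style binary on the PATH.
            cmd = ["blastall", "-p", "blastn", "-d", frag, "-i", query_file]
            try:
                rc = subprocess.run(cmd, capture_output=True, text=True).returncode
            except FileNotFoundError:
                rc = -1        # BLAST not installed; record the attempt anyway
            results.append((worker_id, frag, rc))

    fragments = queue.Queue()
    for f in ["nt.frag00", "nt.frag01", "nt.frag02", "nt.frag03"]:
        fragments.put(f)

    results = []
    workers = [threading.Thread(target=run_blast,
                                args=(i, fragments, "query.fasta", results))
               for i in range(2)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()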


[07_MSc_santos]
Daniele Reis Gonzaga SANTOS. Suporte ao registro e uso de decisões de projeto de aplicações para a Web. [Title in English: Support for recording and using design rationale for Web application design] M.Sc. Diss. Port. Presentation: 13/04/07  115 p. Advisor: Daniel Schwabe.

Abstract: Every designer follows some line of reasoning, and makes several decisions when designing an artifact, which is the final result of this decision process. This design reasoning and decision structure, commonly called Design Rationale, is rarely captured and recorded. The reasons for this seem to be the lack of appropriate tools, which should capture the Design Rationale in an unobtrusive way, allowing the designer to focus on the design itself. Such tools should also allow reusing previous design solutions, helping the designer/developer to improve the quality of the solution. The goal of this dissertation is to provide means to capture, record and use Design Rationale within a prototyping environment for hypermedia applications. As a result, we present the HyperDE+DR environment, which combines the HyperDE environment with the Kuaba approach for recording Design Rationale. The HyperDE+DR environment automatically generates and records design decisions made by the designer during the development process. Questions and ideas are automatically generated and tentatively answered, to be reviewed later by the designer, following the OOHDM and SHDM design methods that underlie the original HyperDE environment. In addition, HyperDE+DR supports Design Rationale use, by allowing the integration of previous design rationales into a design being developed. This allows improving the completeness and consistency of the resulting design, while also lowering development costs.


[07_MSc_carvalho]
Darlinton Barbosa Feres CARVALHO. Um framework para construção de vocabulário e sua aplicação ao problema de seqüenciamento de carros. [Title in English: A framework for vocabulary building and its application to the car sequencing problem] M.Sc. Diss. Port. Presentation: 19/03/07  100 p. Advisors: Carlos José Pereira de Lucena and Celso da Cruz Carneiro Ribeiro.

Abstract: Vocabulary building is a heuristic for solving combinatorial optimization problems, based on the identification of solution fragments which are common to good solutions and on their combination to intensify the search in promising regions of the solution space. This technique can be widely applied in problem solving. Framework technology is an efficient strategy to facilitate the implementation and comparison of algorithms in the same domain. The objective of this work is to develop a framework for the implementation of heuristics based on vocabulary building. Its development was based on a wide literature review of the technique and on good software engineering practices, such as object-oriented frameworks and design patterns. We built applications of the framework to solve the car sequencing problem, a combinatorial problem arising from real requirements of the industry.


[07_PhD_vasconcelos]
Davi Romero de VASCONCELOS. Lógica modal de primeira-ordem para raciocinar sobre jogos. [Title in English: First-order modal logic for reasoning about games] Ph.D. Thesis. Port. Presentation: 12/04/07  241 p. Advisor: Edward Hermann Haeusler.

Abstract: Games are abstract models of decision-making in which decision-makers (players) interact in a shared environment to accomplish their goals. Several models have been proposed to analyze a wide variety of applications in many disciplines, such as mathematics, computer science and even political and social sciences, among others. In this work, we focus on Game Theory and game logics. We present a first-order modal logic based on CTL, namely Game Analysis Logic (GAL), to model and reason about games. The standard models of Game Theory (strategic games, extensive games and coalition games) as well as their solution concepts (Nash equilibrium, subgame perfect equilibrium and core), respectively, are expressed as models of GAL and formulas of GAL. Moreover, we study the alternatives of De Re and De Dicto quantification in the context of extensive games. We also show that two of the most representative game logics, namely Alternating-time Temporal Logic (ATL) and Coalitional Game Logic (CGL), are fragments of GAL. We also characterize a class of multi-agent systems, based on the Belief-Desire-Intention (BDI) architecture, for which there is an essentially equivalent class of games, and vice-versa. As a consequence, criteria of rationality for agents can be directly applied to players, and vice-versa. Formal game analysis tools can be applied to MAS as well. From a practical point of view, we provide and develop a model checker for GAL. In addition, we perform case studies using our prototype.


[07_MSc_nakamura]
Fábio Issao NAKAMURA. Animação interativa de fluido baseada em partículas pelo método SPH. [Title in English: Fluid animation based on particle systems] M.Sc. Diss. Port. Presentation: 30/03/07  64 p. Advisor: Waldemar Celes Filho.

Abstract: This work investigates the use of particle-based systems for fluid animation. Based on proposals presented by Müller et al., the goal of this dissertation is to investigate and fully understand the use of a Lagrangian method known as Smoothed Particle Hydrodynamics (SPH) for fluid simulations. A library has been implemented in order to validate the method for fluid animation at interactive rates. To demonstrate the method's effectiveness and efficiency, the resulting library allows the instantiation of different configurations, including the treatment of fluid-obstacle collisions, interaction between two distinct fluids, and fluid-user interaction.
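
A minimal sketch of the SPH density estimation step at the core of such simulations, using the standard poly6 smoothing kernel from Müller et al. (brute-force neighbor search and arbitrary parameter values; the dissertation's library is not reproduced here):

    import numpy as np

    def sph_densities(positions, masses, h):
        """positions: (N, 3) particle positions; masses: (N,); h: smoothing radius.
        Returns the SPH density at each particle,
            rho_i = sum_j m_j * W(|r_i - r_j|, h),
        with the poly6 kernel W(r, h) = 315 / (64 pi h^9) * (h^2 - r^2)^3 for r < h."""
        coeff = 315.0 / (64.0 * np.pi * h**9)
        diff = positions[:, None, :] - positions[None, :, :]      # (N, N, 3)
        r2 = np.einsum("ijk,ijk->ij", diff, diff)                 # squared distances
        contrib = np.where(r2 < h * h, coeff * (h * h - r2) ** 3, 0.0)
        return contrib @ masses

    # Example: 500 particles of equal mass in a unit box.
    rng = np.random.default_rng(0)
    pos = rng.random((500, 3))
    rho = sph_densities(pos, np.full(500, 0.02), h=0.1)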


[07_MSc_augusto]
Fernanda Duran de Moura AUGUSTO. Um mecanismo de governança para sistemas multi-agentes abertos baseado em testemunhos. [Title in English: A governance mechanism for open multi-agent systems based on testimonies] M.Sc. Diss. Port. Presentation: 04/04/07  96 p. Advisors: Carlos José Pereira de Lucena and Viviane Torres da Silva.

Abstract: Governance copes with the heterogeneity, autonomy and diversity of interests among different agents in a multi-agent system (MAS) by establishing a set of norms. Most governance enforcement mechanisms check norm violations from the point of view of interaction protocols. However, in MAS with heterogeneous and independently designed agents, there will be private messages, which are only perceived by their senders and receivers, and executions of actions that are only observed by the agents that execute them or by the agents that suffer their consequences. This work presents a governance mechanism for MAS based on testimonies. Agents can testify about facts related to norm violations of which they are aware. The proposed mechanism is composed of three sub-systems: reputation, judgment and sanction. This work focuses only on the judgment sub-system, which is responsible for receiving testimonies and providing a decision, pointing out whether an agent has really violated a norm. The judgment sub-system architecture and a generic judgment process are presented. Finally, the use of this mechanism is exemplified by a case study.


[07_MSc_mano]
Fernando Rimola da Cruz MANO. Classificação e segmentação de áudio a partir de fatores de escala MPEG. [Title in English: Using scale factors for segmentation and classification of MPEG audio] M.Sc. Diss. Port. Presentation: 03/09/07 62 p. Advisor: Bruno Feijó.

Abstract: With the growth of production and storage of digital media, audio segmentation and classification are becoming increasingly important. This work builds on characteristics of the MPEG standard, considered the standard for digital media storage and retrieval, to propose efficient algorithms to perform these tasks. While there are many studies based on video analysis, audio information is still not widely used in an efficient way. The suggested algorithms for both tasks are based only on the scale factors present in layer 2 MPEG audio. That allows them to read the smallest amount of information possible, significantly diminishing the amount of data manipulated during the analysis and making their performance excellent in terms of processing time. The algorithm proposed for audio classification divides audio into four possible types: silence, speech, music and applause. The segmentation algorithm finds significant changes in the audio signal that represent clues of audio segments and scene changes. Tests were made with a wide range of types of video, and both algorithms show good results.


[07_MSc_saramago]
Filipe Ancelmo SARAMAGO. Representações para modelagem computacional da discussão estruturada em rede: um estudo de caso com a ferramenta de fórum do ambiente Aulanet. [Title in English: Computational support for net structured discussion: a case study with the AulaNet environment forum tool] M.Sc. Diss. Port. Presentation: 29/03/07  76 p. Advisor: Hugo Fuks.

Abstract: This work presents an investigation of the computational support for net-structured forum discussions. In forums, the discussion is usually structured in a tree format through hierarchically connected messages. The use of a structure more complex than a tree creates some difficulties for the participants: it is more difficult to read, due to the lack of a linear structure, and to write, due to the possibility of more flexible associations among messages. The hypothesis being investigated is that, given specific computational support, the participants of a forum will have an adequate participation in the net-structured discussion; an adequate participation happens when a participant can understand the associations among messages (reading process) or when a new message is posted with associations in the discussion (writing process). The proposed solution was the use of a multiple-reference mechanism, together with the visualization of the forum net structure and the way to build it; these tools were implemented in the Conference service of the AulaNet environment. To investigate the proposed solution, some forum sessions with Computer Science undergraduate and graduate students were conducted. In these sessions, the forum discussion structure, its visualization and the way the discussion was built were varied with the aim of analyzing the influence on the students' participation. From the case study it was concluded that, despite the identified difficulties, the participants managed to follow the discussion and posted messages with associations, which indicates the viability of net-structured forums. Another conclusion was that training with graphical tools to generate the net-structured discussion is useful to eliminate some of the difficulties present in the more complex structured discussion.


[07_MSc_anjos]
Flavia Medeiros dos ANJOS. Reorganização e compressão de dados sísmicos. [Title in English: Re-organization and compression of seismic data]  M.Sc. Diss. Port. Presentation: 09/08/07 78 p. Advisor: Eduardo Sany Laber.

Abstract: Seismic data, used mainly in the petroleum industry, commonly reach sizes of tens of gigabytes and, in some cases, hundreds. This work presents proposals for manipulating these data in order to help overcome the problems that applications for seismic processing and interpretation face while dealing with files of such magnitude. The proposals are based on re-organization and compression. Knowledge of the format in which the data will be used allows us to restructure storage, reducing disk-memory transfer time by up to 90%. Compression is used to save storage space. For data of this nature, the best results in terms of compression rates come from techniques associated with information loss, clustering being one of them. In this work we present an algorithm for minimizing the cost of clustering a set of data into a pre-determined number of clusters. Seismic data have spatial coherence that can be used to improve their compression. Combining clustering with the use of spatial coherence we were able to compress data sets at rates from 7% to 25%, depending on the associated error. A new file format is proposed using re-organization and compression together.


[07_MSc_oliveira]
Glória Maria de Paula OLIVEIRA. Aplicação da análise de sistemas à definição de processos de desenvolvimento de software. [Title in English: Using software engineering concepts to define software development processes]  M.Sc. Diss. Port. Presentation: 24/08/07 92 p. Advisor: Arndt von Staa.

Abstract: Software quality depends heavily on the quality of the process used to develop it. In order to assist the definition of an adequate process, there are several process models, maturity models and quality standards. However, creating or improving a software development process may be tough due to the large amount of available information and the decisions that have to be made. Another central problem is the risk of defining an ineffective process, that is, one that increases bureaucracy but doesn't improve the quality of the systems developed with its support. This dissertation presents an approach for defining software development processes based on the concepts of systems analysis, drawing on the analogy between software and process elaboration. One of the most important attributes of this approach is the focus on risk management, considering the identified risks in the process definition as well as the possible risks during software process execution.


[07_MSc_hazan]
Guilherme Campos HAZAN. Uma especificação de máquina de registradores para Java. [Title in English: A register machine specification for Java] M.Sc. Diss. Port. Presentation: 09/04/07 72 p. Advisor: Roberto Ierusalimschy.

Abstract: The Java language was created with a focus on portability. The code generated by the compiler is interpreted by a virtual machine, and not directly by the target processor, unlike programs written in C. This intermediate code, also known as bytecode, is the key to Java's portability. Java bytecodes use a stack to manipulate the instruction operands. The use of a stack has its pros and cons. Among the advantages, we can cite the simplicity of implementation of the compiler and the virtual machine. On the other hand, there is a slowdown in the program's execution, due to the need to move operands to the stack and retrieve results from it, increasing the number of instructions that are processed. Many studies indicate that register-based virtual machines can be faster than stack-based ones. Based on this, we decided to create a new bytecode specification, suited to a register-based virtual machine. By doing so, we hope to obtain an increase in application performance.
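
To make the contrast concrete, a minimal sketch of the two execution models for the statement a = b + c (instruction names, encodings and these toy interpreters are illustrative assumptions, not the Java bytecode set nor the specification proposed in the dissertation):

    # Stack-based evaluation: operands travel through an operand stack.
    def run_stack(code, env):
        stack = []
        for op, *args in code:
            if op == "load":        # push a variable's value
                stack.append(env[args[0]])
            elif op == "add":       # pop two operands, push the sum
                b = stack.pop(); a = stack.pop(); stack.append(a + b)
            elif op == "store":     # pop the result into a variable
                env[args[0]] = stack.pop()

    # Register-based evaluation: operands are addressed directly, so the same
    # statement needs a single instruction instead of four.
    def run_regs(code, regs):
        for op, dst, src1, src2 in code:
            if op == "add":
                regs[dst] = regs[src1] + regs[src2]

    env = {"b": 2, "c": 3}
    run_stack([("load", "b"), ("load", "c"), ("add",), ("store", "a")], env)

    regs = {"rb": 2, "rc": 3}
    run_regs([("add", "ra", "rb", "rc")], regs)

    print(env["a"], regs["ra"])   # both print 5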


[07_MSc_santos]
Guilherme Nascimento Pate SANTOS. Introduzindo variabilidade no desenvolvimento de sistemas multi-agentes. [Title in English: Introducing variability into the development of multi-agent systems] M.Sc. Diss. Port. Presentation: 22/03/07  110 p. Advisors: Carlos José Pereira de Lucena and Ricardo Choren Noya.

Abstract: Current agent modeling languages aim at representing a system and its agents clearly through diagrams, which show their goals, plans and actions. Even with everything these languages provide, some systems still cannot be represented correctly, because current agent modeling languages represent only a whole system and not a product line. The proposed method therefore tries to map flexibility points onto software agents. The flexibility points of an agent are defined over its plans and actions; such points are flexible when they exhibit a variability characteristic. Variability is presented from two points of view: variability of plans and variability of actions. Variability of plans allows many distinct plans for an agent, that is, distinct applications for each of its plans; variability of actions allows actions to be executed in distinct forms that result in distinct applications. In this way, abstract actions and abstract plans are inherited by concrete actions and concrete plans, which will later define new applications. For this, the method uses diagrams, tags and documentation, the documentation serving as a guide when instantiating plans and actions. These agents can then generate instances of new applications through their own plans and actions, which determines the software product line and, consequently, makes it possible to use the framework idea. With this approach it is possible to introduce into the agent world all the advantages of frameworks and product lines, techniques that are traditionally used in object orientation. Moreover, the approach can be used together with current modeling languages. The benefits of the approach are shown in more detail through a case study.


[07_MSc_wagner]
Gustavo Nunes WAGNER. Visualização interativa de modelos massivos de engenharia na indústria de petróleo com o algoritmo de Voxels distante. [Title in English: Interactive visualization of massive engineering models in the Oil & Gas industry using the FarVoxels algorithm] M.Sc. Diss. Port. Presentation: 09/04/07  88 p. Advisors: Marcelo Gattass and Alberto Barbosa Raposo.
 
Abstract: Current offshore structure projects require virtual prototyping of huge CAD models. These models usually have hundreds of millions of triangles and therefore cannot be sent directly to current graphics boards, which can interactively render only a few million triangles. There are many different approaches to deal with this problem, including a new impostor strategy based on voxel visualization. This strategy is promising because it deals well with level of detail, occlusion and out-of-core model storage. This dissertation presents a variant of the Far Voxels algorithm. This variant is implemented and tested against typical CAD models. Finally, from these tests, the dissertation presents some conclusions and suggestions for future work.


[07_PhD_carvalho]
Gustavo Robichez de CARVALHO. G-Frameworks: uma abordagem para a reutilização de leis de interação em sistemas multiagentes abertos. [Title in English: G-Frameworks: an approach for the reuse of interaction laws in open multi-agent systems] Ph.D. Thesis. Port. Presentation: 14/05/07 186 p. Advisor: Carlos José Pereira de Lucena.

Abstract: One of the challenges of software development is to produce applications that are designed to evolve, reducing maintenance efforts. Many techniques have been proposed to govern interaction laws in open multi-agent systems, but the flexibility and reuse concerns of interaction laws were not systematically addressed by them. The technology of g-frameworks intends to guide the design and implementation of interaction laws in open multi-agent systems, aiming to facilitate the production of interaction law governance mechanisms. The flexibility of g-frameworks is achieved through the specific increments that the instances under development require in order to complete and adapt the original functionalities of the g-framework. Reuse in g-frameworks is related to a common design and implementation of the interaction laws that are shared by instances developed with the g-framework. The benefits of this approach might positively impact software development, considering the costs and the time necessary to construct a family of governance mechanisms for multi-agent systems. In this thesis, some techniques to promote the reuse of interaction laws were proposed to fulfill this goal. A method to guide the development of g-frameworks is also proposed. Experiments were developed and they are described in this thesis.


[07_PhD_rubinsztejn]
Hana Karina Salles RUBINSZTEJN. Suporte à adaptação de conteúdo sensível a contexto para dispositivos móveis em sistemas publish/subscribe. [Title in English: Context aware content adaptation for mobile clients in publish/subscribe systems]  Ph.D. Thesis. Port. Presentation: 14/09/07 180 p. Advisor: Markus Endler.

Abstract: Services for information dissemination ("push" services) are being widely used, in particular for applications involving mobile users. These services generally serve devices with different resources and with distinct execution contexts (wireless connectivity, energy source, etc.), making it necessary to adapt the disseminated content individually and dynamically for each client. Since many content adaptations involve costly operations and demand high processing power, they should not be executed at the mobile clients. On the other hand, it is neither efficient nor scalable to execute the adaptations for each mobile client at the server. Thus, in such services, it is common to use proxies dedicated to content adaptation based on the clients' context. Asynchronous communication, such as publish/subscribe, is considered the most appropriate form of communication for this type of service. However, existing systems for context-aware content adaptation do not support this type of communication. In this thesis we present an architecture for publish/subscribe systems with context-aware content adaptation, which uses an algorithm that optimizes content adaptation for large sets of clients.


[07_MSc_prange]
Henrique Feliciano PRANGE. Uma avaliação empírica de um ambiente favorável para o desenvolvimento dirigido por testes. [Title in English: An empirical evaluation of an environment designed for test driven development] M.Sc. Diss. Port. Presentation: 21/03/07  117 p. Advisor: Arndt von Staa.

Abstract: Test Driven Development (TDD) is one of the eXtreme Programming (XP) practices that is easiest to understand but at the same time difficult to implement. It is necessary to use complementary practices, appropriate tools, and to follow some rules carefully to achieve good results. A real experiment creating an adequate environment for TDD was conducted in a small company, and this study shows the results obtained. What are the advantages and disadvantages of each one of the practices? How can these practices be established in a small company's daily operations? What type of environment has to be built? Which tools? How much time and investment would be required to implement this kind of enhancement? This work presents answers to these questions.


[07_MSc_cunha]
Herbet de Souza CUNHA. Uso de estratégias orientadas a metas para modelagem de requisitos de segurança. [Title in English: The use of goal-oriented strategies to security requirements modeling] M.Sc. Diss. Port. Presentation: 16/03/07  148 p. Advisor: Julio Cesar Sampaio do Prado Leite.

Abstract: Adding security requirements to software architectures after they are built is hard work. Security concepts have to span the whole software development cycle, from requirements engineering to deployment, passing through design, coding and testing. This work presents an approach to security requirements modeling, mainly information confidentiality and consistency, based on goal-oriented strategies, bringing security issues to the beginning of the software development cycle. It also presents the results of applying this approach in a case study.
 

[07_MSc_venetillo]
Jerônimo Silvério VENETILLO. Simulação de partículas baseada em GPU com tratamento de colisão. [Title in English: GPU-based particle simulation with collision handling] M.Sc. Diss. Port. Presentation: 26/03/07  47 p. Advisor: Waldemar Celes Filho.

Abstract: This work presents a new proposal for the implementation of a GPU-based particle system. The simulation runs entirely on the graphics processor, thus eliminating data transfer between the CPU and the GPU. The proposed system is able to simulate particles with different diameters in confined environments, including support for inter-particle collisions, constraints, and particle-obstacle collisions. Inter-particle collision detection is accomplished by subdividing the space into a regular grid of cells. On modern graphics cards, the system is able to simulate up to one million particles at interactive rates. A flexible approach for modeling the obstacles that define the environment is also proposed, allowing the creation of different scenes without relying on shader re-coding. The system is divided into different shaders responsible for each stage of the simulation. One fragment program is responsible for advancing the particles in time. After that, a vertex program builds the space subdivision structure. The following stages (collision detection and response, and constraint solving) are performed only by fragment programs using the relaxation method.
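
A minimal CPU-side sketch of the regular-grid collision culling mentioned above, written in Python/numpy for illustration (the dissertation performs these stages on the GPU with vertex and fragment programs; the cell size and particle counts below are arbitrary):

    import numpy as np
    from collections import defaultdict

    def colliding_pairs(positions, radii, cell_size):
        """Bin particles into a uniform grid and test only particles that share
        a cell or sit in one of the 26 neighboring cells."""
        grid = defaultdict(list)
        cells = np.floor(positions / cell_size).astype(int)
        for i, c in enumerate(map(tuple, cells)):
            grid[c].append(i)

        offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                                for dy in (-1, 0, 1)
                                for dz in (-1, 0, 1)]
        pairs = []
        for i, c in enumerate(map(tuple, cells)):
            for off in offsets:
                for j in grid.get((c[0] + off[0], c[1] + off[1], c[2] + off[2]), []):
                    if j <= i:
                        continue               # count each pair only once
                    d = positions[i] - positions[j]
                    if d @ d < (radii[i] + radii[j]) ** 2:
                        pairs.append((i, j))
        return pairs

    rng = np.random.default_rng(1)
    pos = rng.random((1000, 3))
    r = np.full(1000, 0.01)
    print(len(colliding_pairs(pos, r, cell_size=0.02)))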


[07_MSc_guedes]
José de Souza Pinto GUEDES. Um framework para o cálculo de reputações de agentes de software baseado em testemunhos. [Title in English: A framework for the evaluation of software agents reputation based on testimonies] M.Sc. Diss. Port. Presentation: 28/03/07  117 p. Advisors: Carlos José Pereira de Lucena and Viviane Torres da Silva.

Abstract: Reputation mechanisms are being used to increase the reliability and performance of virtual societies. Different decentralized reputation models have been proposed based on interactions among agents. In these models, each agent evaluates and stores the reputations of the agents with which it has interacted and can also testify to other agents about such reputations. The main disadvantages of such approaches when applied to open large-scale multi-agent systems are the difficulty of establishing strong links between the agents, the sometimes infeasible witness search process, the fact that reputations are influenced by the point of view of another agent, and the fact that agents may not be willing to testify and collaborate with possible competitors. In this work we propose a hybrid reputation system with centralized and decentralized characteristics to overcome such problems. Reputations are provided by the system agents themselves but also by centralized subsystems that can be easily reached by any agent and can supply reliable reputations of any agent, based on testimonies about undesired agent behavior, characterized by the violation of system norms. Such centralized subsystems are instances of the proposed framework.


[07_PhD_imperial]
Juliana Carpes IMPERIAL. Confiança em agentes inteligentes [Title in English: Trust in intelligent agents] Ph.D. Thesis. Port. Presentation: 10/08/07  136 p. Advisor: Edward Hermann Haeusler

Abstract: Trust is a fundamental concern in large-scale open distributed systems. It lies at the core of all interactions between the entities that have to operate in such uncertain and constantly changing environments. Given the complexity of the interactions, these components, and the ensuing system, are increasingly being conceptualised, designed, and built using agent-based techniques. Therefore, the presence of trust is imperative in a multi-agent system (MAS). Consequently, this work studies how to have an explicit trust model in an intelligent agent which has beliefs, desires and intentions (a BDI agent). That is, the agent now has a fourth component called trust. Thus, a logic to include the concept of trust in an open BDI MAS is interesting, so that the different aspects of a trust model can be expressed formally and accurately. This is achieved by using an indexed multi-modal logic, where the possible worlds which model a multi-agent system represent which agents are in the system at a given moment. Moreover, for each one of the three original components of a BDI agent, representing beliefs, desires and intentions, there is a possible-worlds representation, because these are treated as modalities. Trust, however, is modelled as a predicate, not as a modality.


[07_MSc_salgado]
Luciana Cardoso de Castro SALGADO. CommEST - uma ferramenta de apoio ao método de avaliação de comunicabilidade [Title in English: CommEST - a communicability evaluation support tool] M.Sc. Diss. Port. Presentation: 28/03/07  221 p. Advisor: Clarisse Sieckenius de Souza.

Abstract: With increasing competition among software producers and free distribution of software over the Internet, there is a growing concern with developing high-quality software, which can actually improve people's lives and allow for a pleasant and productive use experience. To this end, one of the industry's needs is the availability of methods and techniques for evaluating the use experience. Some computer tools have been developed to support the application of existing evaluation methods. Among them, some are commercial, others have been developed in universities or in non-governmental organizations, and some have even been developed by government agencies. The focus of this work is on the communicability evaluation method, an epistemic tool proposed by Semiotic Engineering, a semiotic theory of human-computer interaction. The method consists of a systematic procedure for evaluating users' experience as they interact with systems, emphasizing the communicative aspects of the process. Although it is taught in a considerable number of graduate and undergraduate schools of Informatics in Brazil, the method is not sufficiently utilized either to generate new knowledge for human-computer interaction research, or to consolidate itself as a tool for extensive professional practice. This is the consequence of difficulties in learning and teaching the method. Therefore, this work presents a computer tool to support the application of the communicability evaluation method, specifically designed to facilitate the teaching and learning of the method.


[07_PhD_lima]
Luciana dos Santos LIMA. Um protocolo para descoberta e seleção de recursos em grades móveis ad hoc. [Title in English: A protocol for resource discovery and selection in mobile ad hoc grids] Ph.D. Thesis. Port. Presentation: 15/06/07  213 p. Advisors: Markus Endler, Luiz Fernando Gomes Soares, Antônio Tadeu de Azevedo Gomes, Artur Ziviani.

Abstract: In the last few years, the use of mobile devices in computational grids has seen growing interest. Nevertheless, a more challenging issue, the dynamic establishment of mobile grids on wireless ad hoc networks, has so far been only partially addressed. The first contribution of this thesis is the proposal of a software architecture for mobile grids that can be used on both infrastructured and ad hoc wireless networks. In the execution of conventional applications in grids, the responsibility to provide the service is shared among the most resourceful mobile devices. In mobile grids, it is fundamental that resource discovery and resource selection are jointly handled. This calls for a mechanism that promotes the automatic selection of the best resource providers amongst the discovered nodes, taking into account the requirements of the application. Discovery and selection, however, have traditionally been handled separately, and in most approaches the selection of resources and services requires explicit intervention by the user of the mobile grid. As a second contribution of this thesis, we propose a protocol that integrates the phases of resource discovery and automatic selection in mobile grids, allowing computational resource provisioning to be scheduled among the most resourceful nodes. Due to the dynamics of the resources needed in a mobile grid (for example, free CPU time and available memory), the protocol is based solely on demand-driven broadcasts. However, mainly in multihop ad hoc wireless networks, this strategy can incur overhead at the involved devices, due to the diffusion of requests and replies. A third contribution of this thesis is the development of a mechanism that reduces this overhead by means of the suppression of redundant replies in the network. The mechanism has been implemented in the context of the proposed protocol, but can be applied to other query-based discovery protocols based on broadcasts as well. The experimental results obtained from executions in a testbed and through simulations show that the proposed protocol provides efficient load balancing between devices with an increasing number of requests. Moreover, it can be observed that the mechanism for suppression of replies scales well with respect to an increasing number of devices when compared to other discovery protocols in wireless ad hoc networks that are purely based on requests via broadcast.


[07_PhD_paula]
Maíra Greco de PAULA. ComunIHC-ES: ferramenta de apoio à comunicação entre profissionais de IHC e engenheiros de software. [Title in English: ComunIHC-ES: an HCI tool to support the communication between HCI professionals and software engineers] Ph.D. Thesis. Port. Presentation: 15/06/07  170 p. Advisor: Simone Diniz Junqueira Barbosa.

Abstract: Developing interactive systems involves professionals from many areas of expertise, including HCI (Human-Computer Interaction) and Software Engineering (SE), each one with specific focus and goals. HCI focuses, generally, on understanding the users' characteristics, needs and objectives, their work environment and the tasks they need or want to perform using the system. Based on this understanding, HCI designs the interface and the interaction, constantly evaluating the produced artifacts. SE, on the other hand, aims primarily at specifying, implementing and testing the interactive system's functionalities and architecture. These two areas have a common objective: to create an interactive system that meets the needs of its users. To attain this goal, this work assumes that, throughout the development process, communication is needed between the professionals from these areas, in order to create a shared understanding about the problem and about what should be built, so that, consequently, the developed application layer will be compatible with the interaction layer, promoting the consistency of what will be presented to the end user. Thus, to support both the communication and the negotiation about interaction design between HCI and SE professionals, this research proposes a communication tool based on semiotic engineering, called ComunIHC-ES. This tool contains information about the problem domain, its users, the tasks involved and the usage context; a language to represent the interaction; and elements that help to explain the HCI design to software engineers. ComunIHC-ES was used in a case study involving professionals from both areas and, after analyzing its results, indications were obtained of its usefulness in supporting both the HCI-SE communication and the software engineers' work.


[07_MSc_malcher]
Marcelo Andrade da Gama MALCHER. Um middleware e aplicativo para apresentação colaborativa sensível a contexto em dispositivos móveis. [Title in English: A middleware and an application for context-aware, collaborative presentation sharing on handhelds]  M.Sc. Diss. Port. Presentation: 24/08/07 109 p. Advisor: Markus Endler.

Abstract: The ongoing improvement of portable devices and the increasing ubiquity of wireless networks enable the development of services and applications for any-place-any-time collaboration among mobile users in many different
environments, such as at home, in public areas, in universities, in companies, among others. It is expected that the use of portable, wireless-enabled devices in classrooms improves the interaction and engagement in the learning process. This work describes a distributed application named iPH (Interactive Presenter for Handhelds) that supports the sharing and co-editing of presentations between an instructor and the students of a classroom, as well as the middleware components used for the development of iPH. This system can be executed on a wide range of devices,
such as tablets, notebooks and handhelds (palmtops or smartphones), and uses the device's context information to adapt itself to improve, for example, the interaction with the user.


[07_MSc_fernandez]
Marcelo Gonella FERNANDEZ. Tratamento e compressão baseada em wavelets para dados adquiridos por sensores. [Title in English: Treatment and wavelet-based compression for sensor data]  M.Sc. Diss. Port. Presentation: 13/09/07  87 p. Advisor: Marco Antonio Casanova.

Abstract: This dissertation introduces a strategy to develop a compression method for sensor data inspired by JPEG 2000 techniques. The adopted strategy processes data streams in much the same way as signal processing. Due to the unstable nature of sensor data, noise is added to the original signal; this noise is detected and treated while the signal is cleaned and smoothed, making it easier to analyze the data stream. Less relevant signal components are removed or approximated, allowing the signal to be compressed with little information loss.
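
A minimal sketch of the general "clean, shrink the less relevant components, then compress" idea described above, using wavelet decomposition with soft thresholding (it relies on the PyWavelets package and illustrative parameters as assumptions, and is not the JPEG 2000-inspired pipeline developed in the dissertation):

    import numpy as np
    import pywt  # PyWavelets

    def denoise_and_sparsify(signal, wavelet="db4", level=4, threshold=0.2):
        """Decompose a 1D sensor stream, soft-threshold the detail coefficients
        (removing noise and less relevant components), and reconstruct."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Keep the approximation untouched, shrink the details toward zero;
        # the many resulting zeros are what a later entropy coder exploits.
        coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)

    t = np.linspace(0, 1, 1024)
    noisy = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
    clean = denoise_and_sparsify(noisy)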


[07_PhD_sayao]
Miriam SAYÃO. Verificação e validação em requisitos: processamento da linguagem natural e agentes. [Title in English: Requirements verification and validation: natural language processing and software agents] Ph.D. Thesis. Port. Presentation: 18/04/07  205 p. Advisor: Julio Cesar Sampaio do Prado Leite.

Abstract: In the software development process, initial activities may involve requirements elicitation, modeling and analysis (verification and validation). The use of natural language in the registration of requirements facilitates the communication among stakeholders, besides offering customers and users the possibility to validate requirements without extra knowledge. On the other hand, in the current global economy, software development by geographically distributed teams is becoming the rule. In this scenario, requirements verification and validation for software of medium or high complexity can involve the treatment of hundreds or even thousands of requirements. At this order of complexity it is important to provide computational support so that the software engineer can carry out quality activities. In this work we propose a strategy which combines natural language processing (NLP) techniques and software agents to support analysis activities. We generate textual or graphical views from groups of related requirements; these views help completeness analysis and the identification of duplicities and dependencies among requirements. We use content analysis techniques to support the identification of omissions in non-functional requirements. We also propose a strategy to construct the lexicon, using NLP techniques. We use software agents to implement web services that incorporate the related strategies, and also agents that act as personal assistants for the stakeholders of the software project.


[07_MSc_shaham]
Noam SHAHAM. Métodos para aceleração do "non-local means" algoritmo de redução de ruído. [Title in English: Methods for the acceleration of the non-local means noise reduction algorithm] M.Sc. Diss. Port. Presentation: 28/02/07  89 p. Advisor: Eduardo Sany Laber.

Abstract: Non-local means is an innovative noise reduction algorithm for images. It performs remarkably better than older-generation algorithms but has a performance penalty that prevents it from being used in mainstream consumer applications. The objective of this work is to find ways of reducing the time complexity of the algorithm and enabling its use in mainstream image processing applications such as home photography or photo printing centers.
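
For reference, the weighting scheme that non-local means is built on, in a deliberately naive form (a sketch only; the dissertation is concerned with accelerating exactly this computation, and the patch, window and filtering parameters below are illustrative):

    import numpy as np

    def nl_means(img, patch=3, search=7, h=0.1):
        """Naive non-local means: each pixel becomes a weighted average of
        pixels in its search window, with weights decaying with the distance
        between patches: w(i, j) = exp(-||P_i - P_j||^2 / h^2)."""
        p, s = patch // 2, search // 2
        padded = np.pad(img, p + s, mode="reflect")
        out = np.zeros_like(img, dtype=float)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                cy, cx = y + p + s, x + p + s
                ref = padded[cy - p:cy + p + 1, cx - p:cx + p + 1]
                acc, wsum = 0.0, 0.0
                for dy in range(-s, s + 1):
                    for dx in range(-s, s + 1):
                        ny, nx = cy + dy, cx + dx
                        cand = padded[ny - p:ny + p + 1, nx - p:nx + p + 1]
                        w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                        acc += w * padded[ny, nx]
                        wsum += w
                out[y, x] = acc / wsum
        return out

    noisy = np.clip(np.eye(32) + 0.1 * np.random.default_rng(0).normal(size=(32, 32)), 0, 1)
    denoised = nl_means(noisy)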


[07_PhD_lucena_rodrigues]
Paula Salgado LUCENA RODRIGUES. Um sistema de geração de expressões faciais dinâmicas em animações faciais 3D com processamento de fala. [Title in English: A system for generating dynamic facial expressions in 3D facial animation with speech processing] Ph.D. Thesis. Port. Presentation: 20/12/07 161 p. Advisors: Bruno Feijó and Luiz Carlos Pacheco R. Velho.

Abstract: This thesis presents a system for generating dynamic facial expressions synchronized with speech, rendered on a realistic three-dimensional face. Dynamic facial expressions are temporal facial expressions semantically related to emotions, speech and affective inputs that can modify the behavior of a facial animation. The thesis defines an emotion model for speech-enabled virtual actors, named VeeM (Virtual emotion-to-expression Model), which is based on a revision of the emotion wheel model of Plutchik. VeeM introduces the concept of an emotional hypercube in the R4 canonical space to combine pure emotions and create new derived emotions. In order to validate VeeM, an authoring and playback facial animation tool named DynaFeX (Dynamic Facial eXpression) has been developed, in which speech processing is performed to allow phoneme and viseme synchronization. The tool allows both the definition and refinement of emotions for each frame, or group of frames, and the editing of the facial animation using a high-level approach based on animation scripts. The player controls the animation presentation, synchronizing the speech and emotional features with the virtual character's performance. DynaFeX is built over a three-dimensional polygonal mesh, compliant with the MPEG-4 facial animation standard, which favors the tool's interoperability with other facial animation systems.


[07_MSc_espinha]
Rafael de Souza Lima ESPINHA. Uma abordagem para a avaliação de processos de desenvolvimento de software baseada em risco e conformidade. [Title in English: An approach for the evaluation of software development processes based on risk and compliance] M.Sc. Diss. Port. Presentation: 27/03/07  132 p. Advisors: Arndt von Staa and Carlos José Pereira de Lucena.

Abstract: Nowadays, one of the main requirements of a software development project is the delivery of a quality product that conforms to the expected schedule and budget and satisfies customer needs. Under the hypothesis that the quality of the developed product is closely related to the processes used in its development, many organizations invest in process improvement programs, where the processes are continuously assessed and improved. In this work we propose an approach for process assessment based on risk and process compliance analysis. This approach is composed of a two-step appraisal method and a supporting tool. In the first step of the method, a quick analysis is executed to identify the most problematic areas. In the second, a more elaborate analysis is performed only in the critical areas, reducing the costs and increasing the effectiveness of the appraisal. The tool uses a mechanism of surveys and checklists to verify the risk and the compliance of the organization's process. A knowledge base is organized in accordance with a reference quality norm or maturity model. At the end of an assessment, reports, tables and charts support decision-making, and they can be used to guide an improvement program. The approach has been used in three case studies.


[07_MSc_rodrigues]
Rafael Ferreira RODRIGUES. Ambiente declarativo para sistemas que implementem o GEM. [Title in English: Declarative environment for GEM (Globally Executable MHP)-based systems] M.Sc. Diss. Port. Presentation: 28/08/07  101 p. Advisor: Luiz Fernando Gomes Soares.

Abstract: The several procedural environment proposals for terrestrial digital TV systems led to the middleware framework recommendation known as Globally Executable MHP (GEM). This standard aims at harmonizing such environments, allowing the global execution of procedural applications but neglecting declarative ones. In this context, this work describes the integration of the Ginga declarative environment using the API supplied by GEM, allowing the global execution of declarative content produced for the Brazilian Digital TV System (Sistema Brasileiro de TV Digital).


[07_MSc_pinto]
Rafael Martinelli PINTO. Modelos e algoritmos para análise de congestionamento e determinação de paradas na logística ferroviária. [Title in English: Models and algorithms for congestion analysis and stop determination in railroad logistics] M.Sc. Diss. Port. Presentation: 11/04/07 70 p. Advisor: Marcus Vinicius Soledade Poggi de Aragão.

Abstract: Planning in railway logistics is an activity of growing importance, due to the high investment costs required to increase railway capacity. Nevertheless, planning in this context is a cumbersome task, since a precise representation is necessary to consider the most relevant aspects of the activity. Mathematical programming is becoming one of the best ways to derive precise representations and to solve them, owing to recent advances in the algorithms and computers used to solve mathematical programming problems. This dissertation presents models and algorithms for tactical and strategic railway planning, which is done by studying a demand planning problem (PPA). First, this problem is considered assuming that the whole railway structure is defined: the network, the locomotives and wagons available, the yards for loading and unloading with their respective rates, and the demand forecast. Next, the question of deciding at which yards to stop is considered. Finally, in a third step, the effect of congestion in parts of the network is introduced into the models. This allows analyzing the variation in travel times and its consequences for the capacity of the logistic structure. Models are presented for all cases of the PPA. Exact and heuristic algorithms, as well as pre-processing techniques, are described for solving the problem. In all cases, the resulting approach allowed solving the problems optimally or near-optimally in reasonable computing time. Computational results are presented on a wide set of real-world instances.
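
As an illustration of the kind of mathematical programming formulation the abstract refers to, the toy model below allocates demand subject to yard loading and unloading rates. It is a generic sketch in PuLP with invented data, variable and constraint names, not the PPA models of the dissertation:

    # Toy demand-planning LP in PuLP, in the spirit of tactical railway models.
    # All data and names are invented for illustration only.
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    demands = {              # (origin yard, destination yard): tons requested
        ('A', 'C'): 120,
        ('B', 'C'): 80,
    }
    loading_rate = {'A': 100, 'B': 100}   # tons/period each origin can load
    unloading_rate = {'C': 150}           # tons/period the destination unloads

    prob = LpProblem("toy_demand_planning", LpMaximize)
    serve = {od: LpVariable(f"serve_{od[0]}_{od[1]}", lowBound=0, upBound=tons)
             for od, tons in demands.items()}

    prob += lpSum(serve.values())          # objective: maximize served tonnage

    for yard, cap in loading_rate.items():                 # loading capacity
        prob += lpSum(v for (o, d), v in serve.items() if o == yard) <= cap
    for yard, cap in unloading_rate.items():               # unloading capacity
        prob += lpSum(v for (o, d), v in serve.items() if d == yard) <= cap

    prob.solve()
    print({od: v.value() for od, v in serve.items()})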


[07_MSc_sousa]
Regiane Lima de SOUSA. Desenvolvimento de aplicações sensíveis ao contexto usando sistemas multi-agentes.
[Title in English: Context-aware application development using multi-agent systems] M.Sc. Diss. Port. Presentation: 20/09/07 64 p. Advisor: Carlos José Pereira de Lucena.

Abstract: The development of context-aware applications (CAAs) is not a trivial task due to their intrinsic features, such as openness, asynchronous communication, and the lack of modular abstractions and mechanisms for the propagation of context information. On the other hand, a software development paradigm is currently considered a basic tool for the construction of any software system. In particular, Software Engineering for Multi-Agent Systems (SEMAS) is often introduced as a promising paradigm for the development of distributed, open, and extensible applications. Software agents are elements whose execution leads to the achievement of the system's goals through their interaction, adaptation, and autonomy properties. These properties make it possible to provide solutions for the development of CAAs that address the common requirements mentioned above. This work has two main purposes: (1) the development of case studies involving the use of SEMAS for three CAA-specific applications; and (2) the proposal of a framework to support the reuse of context-awareness features in CAAs. The evaluation of the case studies and of the framework is used to demonstrate the usability of SEMAS in the CAA-specific domain. Some evidence of the generality of the results is also provided, in addition to quantitative measurements based on common quality attributes, such as modularity.


[07_MSc_novais]
Renato Lima NOVAIS. Coordenação de workflows em ambientes com suporte a dispositivos móveis. [Title in English: Workflow coordination in environments with support for mobile devices] M.Sc. Diss. Port. Presentation: 16/03/07 117 p. Advisor: Marco Antonio Casanova

Abstract: Workflow technology is heavily used to support many processes within organizations. One frequently finds processes that need to be executed in places that are difficult to access or where desktop computers and reliable Internet access are not available, which complicates the automated execution of these activities. However, advances in mobile technologies have made it possible to successfully automate such activities directly in the field. The purpose of this work is to investigate questions related to workflow management systems in environments with support for disconnected operation using mobile devices.


[07_MSc_gralhoz]
Ricardo Augusto Rodrigues GRALHOZ.
LawML: uma linguagem para a modelagem de leis de interação em sistemas multi-agentes abertos. [Title in English: LawML: a language for modeling interaction laws in open multi-agent systems]  M.Sc. Diss. Port. Presentation: 31/07/07  200 p. Advisors: Carlos José Pereira de Lucena and Ivan Mathias Filho.

Abstract: The agent paradigm emerged to satisfy the need for new abstractions for the development of complex and distributed systems. To cope with the unpredictable behavior of open multi-agent systems, governance mechanisms are used to regulate the interactions between agents. This is due to the concurrent and asynchronous nature of these systems, which are formed by several agents that can act autonomously and interact with each other to reach individual goals. In most approaches, the governance rules are specified with declarative languages or new graphical representations, which can make this task costly and can make these governance mechanisms difficult to use. This dissertation presents LawML, a UML-based modeling language for the specification of rules for interactions between agents, whose aim is to facilitate the modeling task and, therefore, the use of a specific governance mechanism based on interaction laws. A set of transformation rules is presented in addition to the language, allowing the graphical interaction law models to be transformed into the declarative language of the governance mechanism, the XMLaw format. To allow the model-driven development of interaction laws, LawGenerator is presented, a tool that automatically transforms the law models based on these transformation rules. Finally, this approach is applied to a case study based on a real distributed system, the Brazilian Central Bank SELIC system, which has the characteristics of an open multi-agent system.


[07_PhD_paes]
Rodrigo de Barros PAES. Governança de sistemas multi-agentes abertos com fidedignidade. [Title in English: Governance of open multi-agent systems with dependability] Ph.D. Thesis. Port. Presentation: 02/10/07 196 p. Advisor: Carlos José Pereira de Lucena.

Abstract: Open multi-agent systems are frequently characterized by having little or no control over the behavior of the agents. The internal implementation and architecture of the agents are usually inaccessible, and they may have been developed by different teams with no coordination between them. Furthermore, agents may enter or leave the system at will. A governance approach defines the interaction rules that must be obeyed by the agents. These rules allow for greater control and predictability of the observable system behavior. In this thesis, we propose a governance approach that deals not only with the monitoring and control of the agents' behavior but also with dependability concerns. The original definition of dependability is the ability to deliver service that can justifiably be trusted. The main benefit of a governance approach that also addresses dependability is the reuse, for dependability purposes, of the monitoring and enforcement mechanisms already present in the governance infrastructure. We present a case study in the context of an air traffic control system to illustrate our approach.


[07_MSc_santos]
Rodrigo Borges da Silva SANTOS. Sistema de controle de versões para edição cooperativa de vídeo MPEG-2. [Title in English: Version control system for cooperative MPEG-2 video editing] M.Sc. Diss. Port. Presentation: 15/03/07 110 p. Advisors: Luiz Fernando Gomes Soares and Marco Antonio Casanova.

Abstract: Technological advances in areas such as capture, storage and compression of digital video are stimulating the development of new services and systems for manipulating and managing huge amounts of video data. Examples are the systems for management, editing and version sharing used by producers of audiovisual content. However, such functional requirements are not found together in a single system. This work describes a system that makes the cooperative editing of audiovisual data in MPEG-2 format possible, allowing version control as well as the visualization and manipulation of content by segments. This collaborative system also offers advantages such as the division of tasks among editors, the merging of different versions and the extraction of authorship information from each version.
 

[07_MSc_guimaraes]
Rodrigo Laiola GUIMARÃES. Composer: um ambiente de autoria de documentos NCL para TV digital interativa. [Title in English: Composer: an authoring tool of NCL documents for interactive digital TV] M.Sc. Diss. Port. Presentation: 13/06/07  106 p. Advisor: Luiz Fernando Gomes Soares.

Abstract: With the adoption of an interactive digital TV standard by the Brazilian government, interest in analyzing possible alternatives for the several areas that compose a digital TV system has increased. In the case of Brazil, NCL is the declarative language adopted for modeling interactive applications in the Brazilian Terrestrial Digital TV System (ISDTV-T, International System for Digital TV). In this context, this work presents Composer, an authoring tool for creating NCL documents for interactive digital TV. As in the HyperProp editor, on which it is based, abstractions in Composer are defined through views that simulate a specific type of editing (structural, temporal, layout and textual). These views work in a synchronized way in order to offer an integrated authoring tool. Besides remodeling the user and functional interfaces, mainly the temporal view, the tool addresses problems of representation and editing of media objects, relationship problems (among them interactive relationships), and live editing. In summary, the proposed system tries to make the creation of documents for digital TV easier, hiding from the author all, or at least some, of the complexity of programming in NCL.


[07_MSc_velloso]
Susana Rosich Soares VELLOSO. SQLLOMining: obtenção de objetos de aprendizagem utilizando técnicas de aprendizado de máquina [Title in English: SQLLOMining: Finding Learning Objects using machine learning methods]
M.Sc. Diss. Port. Presentation: 27/07/07. 118 p. Advisor: Rubens Nascimento Melo

Abstract: Learning Objects (LOs) are pieces of instructional material, such as traditional texts, that can be reused in the composition of more complex objects such as classes or courses. There are some difficulties in the process of LO reuse. One of them is finding pieces of documents that can be used as LOs. In this work we present a process that, in the search for LOs, starts by extracting, transforming and loading a text database and then proceeds to cluster these texts using a machine learning method that combines EM (Expectation-Maximization) and a Bayesian classifier. We implemented this process in a system called "SQLLOMining", which uses the SQL language and text mining methods in the search for LOs.
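
A common way to combine EM with a Bayesian (naive Bayes) classifier for text clustering is to fit a multinomial mixture by alternating soft class assignments and parameter re-estimation. The sketch below shows that generic scheme in NumPy; it only illustrates the technique named in the abstract, not SQLLOMining itself, and the names and smoothing constant are assumptions:

    # Illustrative EM over a multinomial naive-Bayes mixture for text clustering.
    import numpy as np

    def multinomial_mixture_em(X, k=2, n_iter=50, alpha=1e-2, seed=0):
        """Cluster a document-term count matrix X (n_docs x vocab) into k groups."""
        rng = np.random.default_rng(seed)
        n_docs, vocab = X.shape
        resp = rng.dirichlet(np.ones(k), size=n_docs)     # soft assignments
        for _ in range(n_iter):
            # M-step: class priors and per-class word distributions.
            pi = resp.mean(axis=0) + 1e-12
            word_counts = resp.T @ X + alpha              # (k, vocab), smoothed
            theta = word_counts / word_counts.sum(axis=1, keepdims=True)
            # E-step: posterior of each class given each document.
            log_post = np.log(pi) + X @ np.log(theta).T   # (n_docs, k)
            log_post -= log_post.max(axis=1, keepdims=True)
            resp = np.exp(log_post)
            resp /= resp.sum(axis=1, keepdims=True)
        return resp.argmax(axis=1), resp

    if __name__ == '__main__':
        # Two obvious word-usage groups; cluster ids may come out permuted.
        X = np.array([[5, 0, 1], [4, 1, 0], [0, 5, 4], [1, 4, 5]], dtype=float)
        labels, _ = multinomial_mixture_em(X, k=2)
        print(labels)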


[07_MSc_escovedo]
Tatiana ESCOVEDO. IssueNet: um framework para avaliação colaborativa de tarefas. [Title in English: IssueNet: a framework for collaborative task assessment]
M.Sc. Diss. Port. Presentation: 26/07/07 120 p. Advisor: Carlos José Pereira de Lucena.

Abstract: Currently, the business market is characterized by globalization, strong competition, fast changes, an increasing flow and obsolescence of information, and demanding standards of quality and productivity. To follow these transformations, schools also need to evolve from the classical model to collaborative learning, in order to form individuals capable of communicating, working in groups to solve complex and interdisciplinary problems, coordinating their individual work and that of the group, and making the best decisions. This research specifically investigates collaborative evaluation in learning and working groups, and proposes IssueNet, a collaboration framework for the management and collaborative evaluation of tasks. To validate the contributions brought by the framework, and to investigate what other influences it may have on learning or working groups, two case studies using two distinct IssueNet instances were carried out. After analyzing the case studies and the comments of the participants, we concluded that the framework satisfies our expectations by making collaborative evaluation possible in learning or working groups.


[07_PhD_kulesza]
Uirá KULESZA. Uma abordagem orientada a aspectos para o desenvolvimento de frameworks. [Title in English: An aspect-oriented approach to framework development] Ph.D. Thesis. Port. Presentation: 25/04/07 205 p. Advisor: Carlos José Pereira de Lucena.

Abstract: This work proposes a systematic approach to framework development which relies on the use of aspect-oriented (AO) techniques. The main goal of the approach is to improve the extensibility and configurability of object-oriented (OO) frameworks. It is composed of: (i) a set of guidelines to design and implement frameworks using aspect-oriented programming; and (ii) a generative model which allows the automatic instantiation of the framework and its respective OO and AO variabilities. Our guidelines propose the definition of extension join points (EJPs) in the framework code, which can be used to extend the framework's basic functionality by means of extension aspects. The extension aspects are responsible for implementing optional, alternative and integration crosscutting features required by the framework users. Since such aspects can be automatically unplugged from the framework code, our approach makes it easier to customize the framework to specific needs. Three case studies are presented to illustrate the applicability of our approach to the development of frameworks from different domains. The approach is also evaluated through both a qualitative and a quantitative study. Finally, several lessons learned and discussions resulting from the use of the approach are described.
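
The extension join point idea can be pictured with a small stand-in: the framework marks a point where extension behavior may be attached, and optional extensions can be plugged or unplugged without editing framework code. The sketch below uses plain Python hooks as an analogy for the aspect-oriented mechanism described in the abstract; all names are invented:

    # Analogy only: Python hooks standing in for extension aspects woven at
    # extension join points (the thesis uses aspect-oriented programming).
    _extensions = {}   # extension join point name -> list of plugged-in advice

    def extension_join_point(name):
        """Mark a framework method as an extension join point (EJP)."""
        def decorator(method):
            def wrapper(*args, **kwargs):
                result = method(*args, **kwargs)
                for advice in _extensions.get(name, []):   # run plugged extensions
                    result = advice(result)
                return result
            return wrapper
        return decorator

    def plug(name, advice):
        """Attach an optional 'extension aspect' to an EJP."""
        _extensions.setdefault(name, []).append(advice)

    class OrderFramework:                       # hypothetical framework core
        @extension_join_point("after_total")
        def total(self, items):
            return sum(items)

    # Optional feature plugged in without editing the framework code:
    plug("after_total", lambda value: round(value * 1.10, 2))   # add 10% tax
    print(OrderFramework().total([10, 20]))     # 33.0 with the extension plugged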


[07_PhD_costa]
Vaston Goncalves da COSTA. Compactação de provas lógicas. [Title in English: Logical proof compaction] Ph.D. Thesis. Port. Presentation: 09/04/07  66 p. Advisor: Edward Hermann Haeusler.

Abstract: It is well known that the size of propositional classical proofs can be huge. Proof-theoretical studies have discovered exponential gaps between normal or cut-free proofs and their respective non-normal proofs. The task of automatic theorem proving is, on the other hand, usually based on the construction of normal, cut-free or only-atomic-cuts proofs, since this procedure produces fewer alternative choices. There are familiar tautologies whose cut-free proofs are huge while their non-cut-free proofs are small. The aim of this work is to reduce the weight of propositional deductions. To this end we present two methods. The first, named the vertical method, uses extension axioms; we present a method that generates such extension axioms. The second, named the horizontal method, adds suitable (propositional) unifications modulo variable substitutions; we also present a method that generates such unifications during the proving process. The proofs produced correspond, in a certain way, to non-normal proofs (non-cut-free proofs).
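
As a hedged illustration of the extension-axiom ("vertical") idea, a fresh propositional letter may be introduced to abbreviate a repeated subformula, so later steps of the deduction manipulate the single letter instead of copies of the subformula; the exact rule used in the thesis may differ:

    % Toy example of the extension-axiom idea (illustrative only).
    % A fresh letter q abbreviates a repeated subformula A \land B.
    \[
      q \leftrightarrow (A \land B) \qquad \text{(extension axiom, $q$ fresh)}
    \]
    \[
      \underbrace{(A \land B) \lor \big((A \land B) \land C\big)}_{\text{two copies of } A \land B}
      \quad\Longrightarrow\quad
      q \lor (q \land C)
    \]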


[07_MSc_amorim]
Vinci Pegoretti AMORIM. Uma arquitetura flexível para serviços de replicação de bases distribuídas heterogêneas. [Title in English: A flexible architecture for replication services over heterogeneous distributed databases] M.Sc. Diss. Port. Presentation: 19/03/07 60 p. Advisor: Marco Antonio Casanova.

Abstract: The replication services currently available have reached high levels of maturity and performance. However, they do not work with heterogeneous databases. This dissertation first describes a software architecture that focuses on how to provide replication services for heterogeneous databases. To obtain high scalability and to maintain simplicity, the architecture follows a multi-agent structure and adopts a domain-driven design approach. Then, the dissertation describes a reference implementation and discusses the technical decisions adopted, focusing on version control problems, consistency verification and specific business-oriented rules. The dissertation also describes utilities that facilitate the configuration and maintenance of the replication system.


[07_MSc_barroso]
Vitor Barata Ribeiro Blanco BARROSO. Geração de sombras em tempo real para modelos CAD. [Title in English: Real-time shadow generation for CAD models] M.Sc. Diss. Port. Presentation: 02/04/07 86 p. Advisor: Waldemar Celes Filho.

Abstract: Shadow mapping is a widely used rendering technique for shadow generation on arbitrary surfaces. However, because of the limited resolution available for sampling the scene, the algorithm presents two difficult problems: the incorrect self-shadowing of objects and the jagged appearance of shadow borders, also known as aliasing. Generating shadows for CAD (Computer-Aided Design) models presents additional challenges, due to the existence of many thin objects with complex silhouettes and the high depth complexity. In this work, we present a detailed analysis of self-shadowing and aliasing by reviewing and building on the work of different authors. We also propose some improvements to existing algorithms: sample alignment without vertex shaders, a generalized parameter for the LiSPSM (Light-Space Perspective Shadow Map) algorithm, and an adaptive z-partitioning scheme. Finally, we investigate the effectiveness of different algorithms when applied to CAD models, considering ease of implementation, visual quality and computational efficiency.
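
For context on the z-partitioning mentioned above, a common non-adaptive baseline splits the view frustum by blending logarithmic and uniform split distances; an adaptive scheme such as the one proposed in the dissertation would refine these splits per frame. A minimal sketch of that baseline, with an assumed blend weight lam:

    # Standard "practical" z-partitioning baseline (blend of logarithmic and
    # uniform frustum splits).  The dissertation's adaptive scheme refines
    # split placement; this is only the usual starting point.
    def split_distances(near, far, num_splits, lam=0.5):
        """Return the far plane of each of the num_splits depth partitions."""
        splits = []
        for i in range(1, num_splits + 1):
            t = i / num_splits
            log_split = near * (far / near) ** t      # logarithmic placement
            uni_split = near + (far - near) * t       # uniform placement
            splits.append(lam * log_split + (1 - lam) * uni_split)
        return splits

    print(split_distances(1.0, 1000.0, 4))   # increasing distances; last == far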


[07_MSc_aureliano]
Viviane Cristina Oliveira AURELIANO.
eXtreme communication-centered design: um processo ágil para o projeto da interação humano-computador. [Title in English: eXtreme Communication-Centered Design: an agile process for human-computer interaction design] M.Sc. Diss. Port. Presentation: 15/06/07 162 p. Advisor: Simone Diniz Junqueira Barbosa.

Abstract: Interactive software development can follow different kinds of processes, from specification-driven approaches (traditional methods) to prototype-driven approaches (agile methods). Due to their emphasis on documentation from the initial phases, traditional methods allow for more reflection on the software before its implementation and contribute to better maintainability. On the other hand, agile methods reduce documentation, concentrating mainly on the implementation of the system in order to increase productivity in the software development process. As software becomes more interactive and accessible to a wider range of users, human-computer interaction (HCI) concerns have been gaining emphasis in the software development process. However, schedule and budget restrictions limit the application of known and accepted HCI techniques. As a consequence, and similarly to what happened with software development processes, there has been a tendency to adopt simplified usability practices, such as checklists and guidelines. In order to deal with such concerns at design time, and in a way that is not so simplified, this work unites some advantages of different kinds of software development processes to define an HCI design process. This process brings together the support for reflection provided by the Semiotic Engineering (SemEng) theory and the agility of interface prototyping techniques, incorporating the values and practices of agile methods, more specifically of the eXtreme Programming (XP) development process.