Theses and Dissertations
2003
ABSTRACTS
Departamento de Informática
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
This file contains the list of the M.Sc. dissertations and Ph.D. theses presented
to the Departamento de Informática,
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio, Brazil, in 2003.
They are all available in print format and, according to the authors'
preference, some of them are freely available for download, while others are
available for download exclusively to the PUC-Rio community (*).
For any requests, questions, or suggestions, please contact:
Rosane Castilho
bib-di@inf.puc-rio.br
Last update: 31/AUGUST/2005
[03_PhD_alvim]
Adriana Cesário de Faria ALVIM. Uma heurística híbrida de melhoria para o
problema de bin packing e sua aplicação ao problema de escalonamento de
tarefas. [Title in English: A hybrid improvement heuristic for the bin packing
problem and its application to the problem of task scheduling] Ph.D.Thesis. Port. Presentation: 13/06/03 136 p. Advisor: Celso
Carneiro Ribeiro.
Abstract: We propose in this work a hybrid improvement procedure for the bin
packing problem. This heuristic has several components: lower and upper bounds;
reductions; construction of initial solutions by reference to the dual problem;
heuristics for load redistribution based on dominance, differencing, and
unbalancing; and tabu search. We also investigate the application of this hybrid
heuristic to the problem of task scheduling on identical parallel processors.
Computational results on hundreds of benchmark test problems are reported.
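For reference, the classical first-fit decreasing construction (a standard bin packing baseline, not the hybrid heuristic described in this thesis) can be sketched as follows; item sizes and capacity are illustrative:

```python
def first_fit_decreasing(sizes, capacity):
    """Pack items into bins of the given capacity: sort items by
    decreasing size, then place each one into the first bin with room,
    opening a new bin when none fits."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

# Example: four items packed into two bins of capacity 10.
print(first_fit_decreasing([7, 5, 5, 3], 10))  # [[7, 3], [5, 5]]
```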
[03_MSc_pigatti]
Alexandre Altoé PIGATTI. Modelos e algoritmos para o problema de alocação
generalizada (PAG) e aplicações. [Title in English: Models and algorithms for
the generalized assignment problem (PAG) and applications] M.Sc.Diss. Port. Presentation: 25/07/03 74 p.
Advisors: Marcus Vinicius Soledade Poggi de Aragão and Eduardo Uchoa.
Abstract: This dissertation tackles the Generalized Assignment Problem (PAG);
models and algorithms are studied and proposed. This work was motivated by a
real-world application: the Truck Loading Problem (PCC). Research was done on
approximate (metaheuristic) and exact algorithms for solving the PAG. The
approximate algorithms proposed were based on a recent idea by Fischetti and
Lodi, which uses integer programming to explore wider neighborhoods. The results
were compared to the best known, while demanding much less implementation effort
and using less CPU time. The exact algorithm proposed is a
branch-and-cut-and-price developed from the branch-and-price algorithm of
Savelsbergh. We used stabilized column generation techniques similar to the one
by Du Merle, Villeneuve, Desrosiers and Hansen, and devised experiments with
different implementations of this mechanism. The resulting algorithm proved its
efficiency by solving to optimality open instances from the literature. Finally,
experiments with the PCC made possible the evaluation of the developed codes on
real problems.
[03_PhD_montenegro]
Anselmo Antunes MONTENEGRO. Reconstrução de cenas a partir de imagens através de
escultura do espaço por refinamento adaptativo. [Title in English:
Reconstruction of scenes from images by coarse-to-fine space carving] Ph.D.Thesis. Port. Presentation:
12/09/03 198 p. Advisors: Marcelo Gattass and Paulo Cezar Pinto Carvalho.
Abstract: The reconstruction of scenes from images has received special
attention from researchers of the areas of Computer Vision, Computer Graphics
and Geometric Modeling. As examples of application we can mention image-based
scene reconstruction, modeling of complex "as-built" objects, construction of
virtual environments and telepresence. Among the most successful methods used
for the reconstruction of scenes from images are those based on Space Carving
algorithms. These techniques reconstruct the shape of the objects of interest in
the scene by determining, in a volumetric representation of the scene space, those
elements that satisfy a set of photometric constraints imposed by the input
images. Once determined, each photo-consistent element is colorized so as to
reproduce the photometric information in the input images, within some
pre-specified error tolerance. In this work, we investigate the use of rendering
techniques in space carving methods. As a result, we propose a method based on
an adaptive refinement process which works on reconstruction spaces represented
by spatial subdivisions. We claim that such a method can reconstruct the objects
of interest in a more efficient way, using resources proportional to the local
characteristics of the scene, which are discovered as the reconstruction takes
place. Finally, we evaluate the quality and the efficiency of the method based
on the results obtained from a reconstruction device that works with images
captured from webcams.
[03_PhD_abelem]
Antonio Jorge Gomes ABELÉM. Difusão seletiva em inter-redes IP baseadas em redes
ópticas. [Title in English: Multicast communication in optical IP internetworks] Ph.D.Thesis. Port. Presentation: 02/04/03 130 p. Advisors: Michael
Anthony Stanton and Noemi de La Rocque Rodriguez.
Abstract: Multicast communication and recent advances in optical technology,
most specifically in Wavelength Division Multiplexing (WDM), allied with the
consolidation of IP as the dominant protocol of convergent networks, offer new
perspectives for the next generation Internet. This thesis utilises these
technologies to propose a set of adaptations, called MIRROR, to multicast
communication, specifically IP Multicast, in labelled burst-switched optical
networks. MIRROR proposes modifications to traditional IP Multicast in order to
improve its scalability as a function of the number of simultaneously active
groups, as well as making it more appropriate for use in optically switched
networks. Basically, MIRROR includes new proposals for handling state
information about the multicast distribution tree, as
well as for the establishment of label-based multicast paths. In order to
evaluate this proposal, two approaches are followed, one based on a comparative
analysis between MIRROR and a number of other alternatives to IP Multicast
proposed in the literature, and the other based on the implementation of a
prototype in the simulation environment provided
by NS (Network Simulator). The comparative analysis evaluates such parameters
as: state information requirements, control overhead, packet processing
efficiency and tree cost. The prototype implements a new node
structure and alters existing NS modules (OBS and MPLS), to make possible the
simulation of labelled burst-switched
optical networks in the multicast context.
[03_PhD_pessoa]
Artur Alves PESSOA. Dois problemas de otimização em grafos: transporte em redes
de dutos e busca com custos de acesso. [Title in English: Two graph optimization
problems: pipeline transportation and searching with access costs] Ph.D.Thesis. Port. Presentation: 23/05/03
119 p. Advisors: Ruy Luiz Milidiú and Eduardo Sany Laber.
Abstract: We consider two combinatorial optimization problems: the pipeline
transportation problem (PTD) and the problem of searching with different access
costs (PBC). In PTD, we are given a directed graph G = (N,A) where each arc
corresponds to a pipeline. We are also given a set of batches, each batch being
initially located at an arc or node and having a destination node. A subset of
these batches is considered as further batches. Our aim is to find a
sequence of pipeline operations leading all non-further batches to their
corresponding destination nodes. First, we show that PTD is NP-hard, even for
the case where G is acyclic. Next, we present a polynomial algorithm called BPA.
This algorithm solves PTDS, a variation of PTD, for general graphs. For acyclic
graphs, BPA also minimizes a general cost function. For the case of makespan
minimization for PTDS, we prove that there is no (formula) approximate algorithm
for any ε > 0, unless P = NP, where n is the instance size. The previous result
also holds if G is both acyclic
and planar. In PBC, we are given an ordered vector with n elements and the
corresponding access costs. Our aim is to find a search strategy that minimizes
either the average cost with uniform probabilities (PBCM) or the worst case cost
(PBPC). In both cases, the best known exact algorithm requires O(n³) time and
O(n²) space. For PBCM, we present the ratio algorithm, which requires O(n²) time
and O(n) space. This algorithm always obtains a search strategy with average
cost at most 4ln(n+1)/n, assuming the sum of all access costs to be 1.
Furthermore, we introduce approximation algorithms for both PBCM and PBPC. Both
of them give (2 + ε + o(1))-approximate solutions, for any ε > 0, in O(n) time and
space.
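The exact O(n³)-time algorithm mentioned for the worst-case variant (PBPC) follows the standard interval dynamic program over search positions with access costs; a minimal sketch under that assumption (not the thesis's own code):

```python
from functools import lru_cache

def min_worst_case_cost(costs):
    """Minimum worst-case cost of searching an ordered vector where
    probing position k costs costs[k]: choose a probe k in [i, j],
    pay costs[k], and recurse on the worse of the two remaining halves.
    Runs in O(n^3) time over the O(n^2) intervals."""
    @lru_cache(maxsize=None)
    def best(i, j):
        if i > j:  # empty interval: nothing left to search
            return 0
        return min(costs[k] + max(best(i, k - 1), best(k + 1, j))
                   for k in range(i, j + 1))
    return best(0, len(costs) - 1)

# Example: probe position 1 first (cost 2); the worse branch then
# probes position 2 (cost 3), for a worst-case total of 5.
print(min_worst_case_cost((1, 2, 3)))  # 5
```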
[03_MSc_soaresneto]
Carlos de Salles SOARES NETO. Descrição arquitetural da provisão de QoS em
ambientes genéricos de processamento e comunicação. [Title in English:
Architectural description of the QoS provision in generic processing and
communication environments] M.Sc.Diss. Port.
Presentation: 11/08/03 113 p. Advisor: Luiz Fernando Gomes Soares.
Abstract: The increased demand for platforms with support for multimedia
applications raised the importance of mechanisms for Quality of Service
provisioning, since each media has its own processing and communication
requirements. The SCM (Service-Composition Model) provides abstractions for
the representation and programming of QoS aspects and multicast in communication
services. According to its terminology, QoS provisioning can be seen as a
service provider, upon which QoS negotiation and QoS maintenance meta services
act. The QoS negotiation meta service comprises the mechanisms responsible for
the admission of new user flows, while the QoS maintenance meta
service is responsible for maintaining the negotiated level of service during
the service operation. Such meta services had been previously described as
frameworks modelled in UML. The present work focuses on the architectural
description of these meta services using Wright architecture description
language (ADL), which allows the use of its analysis and formal verification
tools to infer properties. To ease this approach, a domain-specific language
(DSL) called LindaQoS is proposed as a notation closer to the abstraction level
of the problem domain, specifically designed to define hierarchies of
negotiation and maintenance subsystems. Moreover, this work presents a compiler
allowing the translation of LindaQoS specifications into architectural
descriptions (currently, using Wright) and into programming languages (JAVA in
the future).
[03_MSc_rolins]
Cláudia Silva Villa Alvarez de Noronha ROLLINS. Um framework para gerência de
contexto orientado a sistemas hipermídia adaptativos. [Title in English: Context
manager framework for supporting adaptive hypermedia systems and other
context-aware applications] M.Sc. Diss. Port.
Presentation: 09/12/03 118 p. Advisor: Luiz Fernando Gomes Soares.
Abstract: Context-aware computing is a paradigm that allows applications to
discover and take advantage of the information that surrounds them at a given
time, such as location, nearby resources, available infrastructure and user
preferences, among others. Research in this area focuses on mobile or ubiquitous
computing. However, recent research indicates that several aspects of this area
are not limited to mobility and ubiquity; systems connected to other areas can
also profit from them.
Comprehensive multimedia/hypermedia systems must support all the phases of a
document treatment. The treatment must encompass authoring, as well as storage
and final presentation to the reader (end user). Particularly in this type of
system, it is desirable to have tools that adapt the document, that is, tools
that apply transformations to the hyper-document, making it adequate for
visualization. Hypermedia systems with adaptation strategies are an example of
context-aware applications. The goal of this dissertation is to create a new
architecture for context management: one architecture developed and built as an
independent structure that may be reused by different systems, particularly by
hypermedia systems. This architecture is based on a model of context information
representation and on a reusable framework that can be incorporated into several
implementations.
[03_MSc_rocha]*
Cristiano Braz ROCHA. Inferências semânticas na recuperação de informações para
aplicações hipermídia. [Title in English: Semantic inferences in information
retrieval for hypermedia applications] M.Sc.Diss. Port. Presentation: 24/07/03 134 p. Advisors:
Marcus Vinicius Soledade Poggi de Aragão and Daniel Schwabe.
Abstract: The information overload problem is one of the most challenging
problems being faced today. In order to solve this problem, different areas such
as Knowledge Management, Semantic Web and Hypermedia Applications Modeling have
used similar solutions that consist basically of semantically structuring the
information so that it can be better accessed. This dissertation proposes an
infrastructure based on classic algorithms and techniques of Artificial
Intelligence that utilizes the availability of domain specific models to enable
the applications where they are defined
to make inferences about these particular domains. These inferences enable the
creation of new functionalities in these applications. Four new functionalities
were proposed and implemented, the most important being a semantic search. The
new functionalities presented were successfully tested in two existing
applications: the website of the Computer Science Department of PUC-Rio and the
Portinari Knowledge Portal, which presents all the work of the
famous Brazilian painter Candido Portinari.
[03_PhD_cerqueira]
Cristina Ururahy da Fontoura CERQUEIRA. Um modelo de programação multilinguagem
para aplicações geograficamente distribuídas. [Title in English: A multilanguage
programming model for wide-area distributed computing] Ph.D.Thesis. Port. Presentation:
12/09/03 121 p. Advisor: Noemi de La Rocque Rodriguez.
Abstract: In this work we propose the use of ALua, an event-driven communication
mechanism for coordinating and developing distributed parallel applications,
based on the interpreted language Lua. ALua adopts a multilanguage programming
model for distributed parallel applications, acting as a gluing element among
precompiled program
parts running on different machines. New developments in parallel programming,
such as Grid computing, and current interest in wide-area distributed computing
demand new levels of flexibility, such as the use of adaptive strategies and the
ability for a user to interfere with a computation without having to stop it.
Furthermore, because of its asynchronous nature, event-driven programming
provides a suitable model for environments subject to failures and delays that
are frequent in the context of geographically distributed computing. In this
work we show that ALua
can achieve the required flexibility through mechanisms for monitoring and
adapting not only applications but also the execution environment, and can also
exploit its interpretive nature to allow the programmer to modify the behavior
of the application during its execution.
[03_MSc_orlean]
Daniel Abadi ORLEAN. Um processo unificado para engenharia de ontologias.
[Title in English: The unified process for ontology engineering] M.Sc.Diss. Port. Presentation: 21/08/03 159 p. Advisor: Carlos José Pereira de
Lucena.
Abstract: The Semantic Web is now a reality. Several projects all around the
world are already using tools and technologies developed to support the second
generation of the Web to provide machine-processable content for software
agents, Web services and applications. However, computers cannot agree on a
consensual language by themselves. Ontologies can be used as a way to provide
this shared conceptualization, making possible
the desired communication among organizations, people and applications. Several
proposals have already been presented regarding ontology engineering - many
supported by academic and industrial case studies. However, none of them
encompasses all the requirements identified for an ontology construction
project. This work describes the unification of different features extracted
from those methodologies to build a process framework named KUP - the Knowledge
Unified Process. This unified process is based on several industry best
practices and on a well accepted ontology methodology evaluation framework. Two
case studies were developed so as to support and validate this process
framework. The first was the development of a semantic Web solution for security
information knowledge management, and the second was the integration of a
skill management tool with a learning management system through ontologies.
[03_MSc_vasconcelos]
Davi Romero de VASCONCELOS. Análise de estratégias utilizando verificação formal
de modelos. [Title in English: Analysis of strategies using model checking] M.Sc.Diss. Port. Presentation: 25/02/03 136 p. Advisor: Edward
Hermann Haeusler.
Abstract: In formal methods, one of the approaches that has been successful
lately is Model Checking, a technique for the automatic verification of system
behavior. Nowadays, model checking is frequently employed in Computer Science
for the formal verification of software and hardware, but it is not used in
other knowledge fields, such as Mathematics and Economics. The purpose of this
research is to apply model checking to economics problems, using Game Theory.
Some inadequacies have been observed; therefore, it is necessary to create new
definitions: a generic and qualitative definition of game that uses a logic
language called Game Analysis Logic (GAL); a new language to describe games
called RollGame (Romero + All Game); a translation from RollGame to a model
specification language; and a translation from the game definition to a Kripke
structure. It has also been observed
that the use of model checking makes it possible to analyze players' strategies.
One tool, called StratAn-RollGame (Strategy Analyzer using RollGame), makes the
translation from RollGame to model checking automatic. Thus, the
present research has demonstrated that it is indeed possible to use model
checking in other knowledge fields.
[03_PhD_saade]
Débora Christina Muchaluat SAADE. Relações em linguagens de autoria hipermídia:
aumentando reúso e expressividade. [Title in English: Relations in hypermedia
authoring languages: improving reuse and expressiveness] Ph.D.Thesis. Port. Presentation: 28/03/03 215
p. Advisor: Luiz Fernando Gomes Soares.
Abstract: This work is related to hypermedia authoring and execution
environments, and its main focus is declarative document authoring. Starting
from studies about architectural description languages (ADL), which are used for
specifying software system architectures, this thesis identified facilities
found in ADLs that could be applied to
the hypermedia domain, with advantages. Aiming at improving the expressiveness
and reuse in the specification of relations in hypermedia authoring languages,
this work introduced the concept of hypermedia connector, which has a role
similar to ADL connectors, that is, representing relations among components of a
document. Besides connectors, this work also introduced the concept of
hypermedia composite template, which has a role similar to architectural
styles in ADLs, that is, representing generic structures of nodes and links that
can be reused in several distinct documents. As a validation of the proposed
concepts, the 2.0 version of the NCL - Nested Context Language - hypermedia
authoring language, based on the NCM - Nested Context Model - conceptual model,
was developed and
integrated into the HyperProp hypermedia system, incorporating the new facilities.
The NCL 2.0 language was developed using a modular structure, following the
principles adopted by the W3C - World-Wide Web Consortium. Thus, its modules for
the specification of connectors and templates, respectively called XConnector
and XTemplate, can be
incorporated into other existing languages, such as XLink, XHTML and SMIL, used
for Web document authoring. This thesis also proposes extensions to these
languages, exemplified by the incorporation of XConnector and XTemplate
facilities into the XLink standard.
[03_MSc_souza]
Edinalda Maria de SOUZA. Um estudo sobre um algoritmo para visualização de
terrenos. [Title in English: A study on a terrain visualization algorithm] M.Sc.Diss. Port. Presentation: 04/07/03 78 p. Advisor: Marcelo
Gattass.
Abstract: Algorithms for the interactive visualization of terrains are very
complex and, at the same time, of great importance to many applications, such as
games and activity-planning over terrains. Due to such complexity and
importance, in the past decade this subject has received great attention by
researchers on Computer Graphics. As
a consequence, a number of strategies have been developed. Among the most
successful strategies, one can highlight recent works by Lindstrom and Pascucci.
The algorithm proposed by these authors has various implementations available on
the Internet and deserves to be reevaluated. The present work makes such a
reevaluation by means of an
independent implementation developed by the author and tested over a base of
real terrains. With the purpose of making this analysis more complete and to
support some conclusions, comparative results with other algorithms in the area
are also presented.
[03_MSc_melo]*
Fábio Cunha Lobo de MELO. Um método para a implementação de SMAs baseado em
componentes. [Title in English: A component-based
method to implement MASs] M.Sc.Diss. Port. Presentation: 20/08/03 108 p. Advisors: Carlos
José Pereira de Lucena and Renato Fontoura de Gusmão Cerqueira.
Abstract: In the past few years, the Multi-Agents Systems (MAS) area has
presented an accelerated growth. New techniques and tools are constantly being
proposed and the number of specialists dedicated to this subject is increasing.
Many methodologies have been published to support the development of multi-agent
systems. However, most of them concentrate only on the system analysis phase.
This work proposes a method to implement MASs using software components. During
the analysis and design phases, the ANote language was used. It contains seven
diagrams that model different aspects of a MAS and a proper notation for
describing agents and different views of the system. An agent implementation
model based on components is proposed and the mappings from the MAS elements to
the system implementation are described. To validate the model, a Case Study is
presented using the concepts described in this proposal. The Case Study consists
of a virtual marketplace where agents are responsible for buying and selling
products. The implementation uses the CORBA Component Model (CCM) and a language
for agent communication called FIPA-ACL.
[03_MSc_dias]*
Fábio Meira de Oliveira DIAS. Execução de workflow em ambientes com desconexão.
[Title in English: Workflow execution in disconnected environments] M.Sc.Diss. Port. Presentation: 02/02/03 60 p. Advisor: Marco Antonio Casanova.
Abstract: Workflow management systems are frequently used for
modeling, monitoring and controlling the coordinated execution of activities
performed by workgroups in a variety of contexts. With the widespread use of
portable computers and their growing computational power, conventional systems
have often proved to be overly restrictive,
effectively limiting the level of autonomy of the users involved. The primary
goal of this work is to identify and analyse different flexibilization
techniques and mechanisms that can be employed in a workflow management system
aimed at supporting disconnected operation. The main challenge is to provide a
satisfactory degree of independence
among individuals in cooperating teams who share a common goal and work in
disconnected environments. In order to test the viability of the ideas discussed
in this dissertation, a system was built whose design met the requirements
presented in the text and which allows the exploration of specific features of
different kinds of workflow so as
to enhance execution flexibility, without compromising the predefined structure.
[03_PhD_ayres]
Fausto Veras Magalhães AYRES. QEEF – uma máquina de execução de consultas
extensível. [Title in English: QEEF – an extensible query execution engine] Ph.D.Thesis Port.
Presentation: 19/12/03 125 p. Advisor: Rubens Nascimento Melo.
Abstract: Query processing in traditional Database Management Systems (DBMS)
has been extensively studied in the literature and adopted in industry. Such
success is, in part, due to the performance of their Query Execution Engines
(QEE) in supporting the traditional query execution model. The advent of new
query scenarios, mainly due to the web computational model, has motivated
research on new execution models, such as adaptive and continuous ones, and on
semistructured data models, such as XML, both not natively supported by
traditional query engines. This thesis proposes the development of an extensible
QEE adapted to the new execution and data models. To achieve this goal, we use a
software design approach based on the framework technique to produce the Query
Execution Engine Framework (QEEF). Moreover, we address the question of the
orthogonality between execution and data models, which allows for executing
query execution plans (QEP) with fragments in different models. The
extensibility of our solution is specified in a QEP by an execution meta-model
named QUEM (QUery Execution Meta-model), used to express different models
in a meta-QEP. During query evaluation, the latter is pre-processed by the QEEF,
producing a final QEP to be evaluated by the running QEE. The QEEF is
instantiated for different execution and data models as part of the validation
of this proposal.
[03_PhD_lima]
Fernanda LIMA. Modelagem semântica de aplicações na WWW. [Title in English:
Semantic modeling of Web applications] Ph.D.Thesis. Port.
Presentation: 28/03/03 128 p. Advisor: Daniel Schwabe.
Abstract: In this thesis we present a method for the design and implementation
of Web applications for the Semantic Web. Based on the "Object Oriented
Hypermedia Design Method" approach, we used ontology concepts to define an
application conceptual model, extending the expressive power of the original
method. The navigational models
definitions use a query language capable of querying both schema and instances,
enabling the specification of flexible access structures. Additionally, we
propose the use of faceted access structures to improve the selection of
navigational objects organized by multiple criteria. Finally, we present an
implementation architecture that allows the direct use of the application
specifications when deriving a final application implementation.
[03_MSc_ferreira]
Francisco Eduardo dos Reis FERREIRA. Desenvolvimento de aplicações baseadas em
serviços na Web semântica. [Title in English: Developing service-based
applications on the semantic web] M.Sc.Diss. Port. Presentation: 26/03/03 103 p.
Advisors: Carlos José Pereira de Lucena and Daniel Schwabe.
Abstract: This work addresses issues in the development of service-oriented
applications on the Semantic Web. It presents Everyware, a platform conceived to
support information sharing among different services accessed by different
devices and hardware infrastructures. Using open standards and a totally
distributed approach, Everyware allows users to publish their services without
the need for configuration and administration. Services can also be semantically
tagged and integrated into the Semantic Web, so that they can be easily
processed by machines. The Everyware platform offers a standard architecture for
application development and an API for the development of coordination
mechanisms. It integrates most concerns that are present in service-based
applications.
[03_MSc_carvalho]
Gustavo Robichez de CARVALHO. Uma arquitetura para a coordenação e a composição
de artefatos de software. [Title in English: Architecture for
coordination and composition of software artifacts] M.Sc.Diss. Port. Presentation: 22/08/03 88 p.
Advisors: Carlos José Pereira de Lucena and Arndt von Staa.
Abstract: Component Based Software Engineering is an approach for reusing
software artifacts when developing applications. In order to develop solutions
using this approach, it is necessary to compose software using components that
have already been developed. After putting those pieces together, we need to
coordinate the interdependencies established among those compositions to fulfill
the requirements, needed to resolve a problem. This dissertation proposes a
software architecture that separates and structures the concepts of
coordination, composition and software components in different architectural
layers. Using this approach, we expect that specific modifications in layer
constructions have the minimum impact on the others layers. ACCA (Architecture
for Coordination and Composition of software Artifacts) must be understood as a
conceptual structure that is used to organize component based software
development. It also presents a composition framework, the reification process
for ACCA and a software development process organized using this approach.
[03_PhD_rosseti]
Isabel Cristina Mello ROSSETI. Estratégias seqüenciais e paralelas de GRASP com
reconexão por caminhos para o problema de síntese de redes a 2-caminhos.
[Title in English: Sequential and parallel strategies of GRASP with
path-relinking for the 2-path network design problem] Ph.D.Thesis. Port. Presentation: 24/07/03 120 p. Advisor: Celso Carneiro
Ribeiro.
Abstract: Let G = (V,E) be a connected undirected graph, where V is the set of
nodes and E denotes the set of edges. A 2-path between nodes (s,t) is a sequence
of at most two edges connecting them. Given a non-negative weight function
associated with edges of G and a set D of origin-destination pairs of nodes, the
2-path network design problem (2PNDP) consists in finding a minimum weighted
subset of edges containing a 2-path between the extremities of every
origin-destination pair in D. Applications can be found in the design of
communication networks, in which paths with few edges are sought to enforce high
reliability and small delays. The GRASP metaheuristic is a multistart process,
in which each iteration consists of two phases: construction and local search.
The best solution found after a fixed number of iterations is returned.
Path-relinking is applied as an attempt to improve the solutions
found at the end of each GRASP iteration. Parallel implementations of
metaheuristics are very robust. Typical parallelizations of GRASP correspond to
multiple-walk independent-thread strategies, based on the balanced distribution
of the iterations over the processors. In the case of multiple-walk
cooperative-thread strategies, the processors exchange and share information
collected along the trajectories that they investigate. In this thesis,
sequential and parallel heuristics are developed for 2PNDP. Variants and
combinations of GRASP with path-relinking are analyzed by comparing the results
of the proposed algorithms with those obtained by other algorithms described in
the literature. Parallel GRASP with path-relinking heuristics are compared to
investigate the influence of the cooperation among processors in terms of
solution quality and processing time. We also explore different strategies to
optimize the parallel implementations, to make better use of the computational
resources and to reduce communication and memory conflicts.
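The feasibility condition in the 2PNDP definition above (every origin-destination pair connected by at most two of the chosen edges) can be checked directly; a small illustrative sketch, with names that are hypothetical rather than taken from the thesis:

```python
def satisfies_2paths(edges, pairs):
    """Check whether the undirected edge subset contains, for every
    (s, t) pair, a path with at most two edges: either the edge (s, t)
    itself or a common neighbor of s and t."""
    adj = {}  # adjacency sets of the chosen subgraph
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    def ok(s, t):
        ns = adj.get(s, set())
        return t in ns or bool(ns & adj.get(t, set()))
    return all(ok(s, t) for s, t in pairs)

# Edges {1-2, 2-3} give a 2-path between 1 and 3 (via node 2),
# but no 2-path reaches node 4.
print(satisfies_2paths([(1, 2), (2, 3)], [(1, 3)]))  # True
print(satisfies_2paths([(1, 2), (2, 3)], [(1, 4)]))  # False
```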
[03_MSc_imperial]
Juliana Carpes IMPERIAL. Técnicas para o uso de cálculo de Hoare em PCC. [Title
in English: Techniques for the use of Hoare logic in proof-carrying code]
M.Sc.Diss. Port. Presentation: 14/08/03 137 p. Advisor: Edward Hermann Haeusler.
Abstract: Nowadays most computer programs
are obtained from the Web. Since their source is usually unknown, it is
necessary to be sure that the code of the program behaves as expected. The ideal
solution would be to verify the code against a specification of safety policies.
However, this can take too much time. Another approach is making the code itself
prove that it is safe. The concept of proof-carrying code (PCC) is based on this
idea: a program carries a proof of its conformity with certain safety policies.
That is, it carries a proof concerning properties related to the code itself.
Therefore, the same formal methods employed in the formal verification of
programs can be used in this technology. For this reason, this work studies how
Hoare logic, a formal method to verify programs, applied to source code written
in an imperative programming language, can be useful to PCC.
Consequently, methods are investigated for generating proofs of program
correctness with Hoare logic, so that it becomes possible to produce
safety proofs for PCC programs.
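As a small illustration of the machinery involved, Hoare's assignment axiom computes the weakest precondition of `var := expr` for a postcondition Q by substituting expr for var in Q. A minimal sketch (our own, requiring Python 3.9+ for `ast.unparse`; not the thesis's tooling):

```python
import ast

def wp_assign(var, expr, postcond):
    """Weakest precondition of the assignment 'var := expr' for postcondition
    Q: returns Q with every occurrence of var replaced by expr."""
    new_node = ast.parse(expr, mode="eval").body

    class Subst(ast.NodeTransformer):
        def visit_Name(self, node):
            return new_node if node.id == var else node

    tree = Subst().visit(ast.parse(postcond, mode="eval"))
    return ast.unparse(ast.fix_missing_locations(tree))

# {x + 1 > 0} x := x + 1 {x > 0}
print(wp_assign("x", "x + 1", "x > 0"))   # -> x + 1 > 0
# Sequencing rule: wp(S1; S2, Q) = wp(S1, wp(S2, Q))
print(wp_assign("y", "x", wp_assign("x", "x + 1", "x > y")))  # -> x + 1 > x
```

In a PCC setting, formulas produced this way become the verification conditions whose proofs the code would carry.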
[03_MSc_rezende]
Juliana Lucas de REZENDE. Aplicando técnicas de conversação para a facilitação
de debates no ambiente AulaNet. [Title in English: Applying conversation
techniques to facilitate textual chat in the AulaNet Environment] M.Sc.Diss. Port. Presentation: 20/03/03 171 p.
Advisor: Hugo Fuks.
Abstract: It is difficult to coordinate participants simultaneously interacting
and generating ideas in a textual chat session. During several editions of the
ITAE course - Information Technologies Applied to Education - the following
difficulties were perceived by the mediators: ensuring participation from all
users, keeping the focus of the conversation, and maintaining the flow of the
chat. This
research investigates the use of conversation techniques to facilitate the
dynamics of
textual chats. A tool (Mediated Chat 2.0) implementing those techniques was
developed and made available to help mediators conduct chat sessions in a course
using the AulaNet environment. Currently, the standard application used in the
AulaNet environment is the Mediated Chat 1.0, an instance of the Communication
Channels Framework
developed in the Software Engineering Laboratory at PUC-Rio in 2000. The
Mediated Chat 2.0 is an instance of the same framework. To evaluate the
developed application, chat sessions were carried out in ITAE 2002.2, where both
Mediated Chat 1.0 and 2.0 were used and compared. The chat sessions were
analyzed to evaluate whether the mediators managed to successfully apply the
dynamics supported by the Mediated Chat 2.0. The Mediated Chat 2.0 will be
incorporated into the AulaNet software.
[03_MSc_duarte]
Julio Cesar DUARTE. A transformada de
Burrows-Wheeler e sua aplicação à compressão. [Title in English: The
Burrows-Wheeler Transform and its applications to compression] M.Sc.Diss. Port. Presentation: 25/03/03
81 p.
Advisor: Ruy Luiz Milidiú.
Abstract: The Burrows-Wheeler Transform, based on sorting of contexts,
transforms a sequence of characters into a new sequence easier to compress by an
algorithm that exploits long runs of repeated characters. Combined with the
coding provided by the Move-to-Front algorithm and followed by an encoding of
the generated integers, these steps yield a new family of compressors that
achieve excellent compression rates with good time performance in compression
and decompression. This work examines this transform in detail, along with its
variations and some alternatives for the algorithms used together with it. As a
final result,
we present a combination of strategies that produces compression rates for text
data that are better than those offered by implementations available nowadays.
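The pipeline the abstract describes can be sketched in a few lines. This is a didactic version (sorted rotations are O(n² log n); practical BWT implementations use suffix arrays), with an example string of our choosing:

```python
def bwt(s, sentinel="\0"):
    """Burrows-Wheeler Transform via sorted rotations: sort all rotations of
    s and keep the last column, which groups characters by context."""
    s += sentinel                          # unique terminator makes the transform invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def move_to_front(s):
    """Encode each symbol as its index in a self-adjusting list: recently
    seen symbols get small indices, so BWT runs become runs of zeros."""
    alphabet = sorted(set(s))
    out = []
    for c in s:
        i = alphabet.index(c)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))   # move the symbol to the front
    return out

text = "banana_bandana"
print(repr(bwt(text)))                # equal characters cluster after context sorting
print(move_to_front(bwt(text)))       # small integers, friendly to a final entropy coder
```

A final integer coder (e.g. run-length plus Huffman or arithmetic coding) over the Move-to-Front output completes a compressor of this family.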
[03_MSc_kuriszky]
Konstantin KURISZKY. Um estudo para o compartilhamento de objetos de aprendizado
em banco de dados multimídia. [Title in English: A study for sharing
learning objects in multimedia database] M.Sc.Diss. Port. Presentation: 21/03/03
287 p. Advisor: Rubens Nascimento Melo.
Abstract: This work presents a proposal to
utilize database technology for storing and managing learning objects in a
database federation (distributed database). The evolution of e-learning has
brought into focus the productivity of creating and managing the content of
learning modules, which today comprises video, audio, and other related data
besides text. Instructors normally store this material without worrying
about sharing. As a member of PGL (Partnership in Global Learning), a virtual
organization for research, development and dissemination of learning through new
technologies, TecBD (the PUC-Rio Database Laboratory) is researching the use
of a database approach for managing learning objects stored on interconnected
sites that compose a heterogeneous distributed database environment. This work
intends to: 1) use market-ready DB products; 2) adopt the current standards for
the definition of learning objects; 3) consider learning objects stored on
separate and autonomous sites; and 4) develop an application (a case study) with
these learning objects. The learning object’s model establishes a structure for
composing learning objects by linking atomic elements and also linking composed
elements. Different approaches as Web Services, Java/Servlets and Web
Application Servers were considered for the geographically distributed problem.
A case study was built using IBM DB2 with the provided extenders for
audio, video, image and XML data, and the Federated System Support. Web-browser
exploration of the stored data was built using the IBM Net.Data software.
Although not the only option, it provided an easy way to perform this task and
also enabled easy integration with IBM DB2 and its extenders.
[03_MSc_moreno]*
Lorenza Leão Oliveira MORENO. Códigos de prefixo de rápida decodificação.
[Title in English: Fast-decoding prefix codes] M.Sc.Diss. Port. Presentation: 21/03/03 84 p. Advisor: Ruy
Luiz Milidiú.
Abstract: Even with the evolution of communication and storage devices, the use
of complex data structures, like video and hypermedia documents, keeps
increasing the demand for efficient data compression mechanisms. Prefix codes
are among the best-known compressors, since they are used as a stage of
compression methods that combine different algorithms, besides presenting good
performance when used on their own. Many approaches have been tried to improve
the decoding speed of these codes. One major reason is that files are compressed
and updated just a few times, whereas they have to be decompressed each time they
are accessed. This work presents prefix codes and their decoding techniques in
order to introduce a new coding scheme. In this scheme length-restricted codes
are used to control the space requirements of the Look-up table, an efficient
and fast prefix codes decoding method. Since restricted codewords are used, a
small loss of compression efficiency is admitted. Empirical experiments indicate
that this loss in the coded text is smaller than 11% if a character based model
is used, and the observed average decoding speed is five times faster than the
one for canonical codes. For a word-based model, the average decoding speed is
3.5 times faster than a canonical decoder, but it decreases when a large number
of symbols is used. Hence, this method is very suitable for applications where a
character based model is used and extremely fast decoding is mandatory.
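The look-up-table decoding idea the abstract builds on can be sketched as follows: with codeword lengths capped at L bits, a table of 2^L entries maps every possible L-bit window directly to a (symbol, length) pair, so each symbol is decoded with one table access. The example alphabet and lengths are ours:

```python
def canonical_codes(lengths):
    """Assign canonical codewords given a map symbol -> code length."""
    code, prev_len, codes = 0, 0, {}
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (ln - prev_len)          # extend to the next length
        codes[sym] = (code, ln)
        code += 1
        prev_len = ln
    return codes

def build_table(codes, max_len):
    """Look-up table: every max_len-bit window -> (symbol, true length)."""
    table = [None] * (1 << max_len)
    for sym, (code, ln) in codes.items():
        pad = max_len - ln
        for suffix in range(1 << pad):    # all windows starting with this codeword
            table[(code << pad) | suffix] = (sym, ln)
    return table

def decode(bits, table, max_len):
    out, pos = [], 0
    bits += "0" * max_len                 # padding so the last window is full
    while pos < len(bits) - max_len:
        sym, ln = table[int(bits[pos:pos + max_len], 2)]
        out.append(sym)
        pos += ln                         # advance by the real codeword length
    return out

lengths = {"a": 1, "b": 2, "c": 3, "d": 3}     # length-restricted to 3 bits
codes = canonical_codes(lengths)
encoded = "".join(format(c, "0{}b".format(ln))
                  for c, ln in (codes[s] for s in "abacad"))
table = build_table(codes, 3)
print(decode(encoded, table, 3))   # -> ['a', 'b', 'a', 'c', 'a', 'd']
```

Restricting the maximum codeword length bounds the table at 2^L entries, which is exactly the space/compression trade-off the dissertation quantifies.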
[03_MSc_paula]
Maíra Greco de PAULA. Projeto da interação humano-computador baseado em modelos
fundamentados na engenharia semiótica: construção de um modelo de interação.
[Title in English: Model-based design of human-computer interaction grounded on
semiotic engineering: an interaction model] M.Sc.Diss. Port. Presentation: 27/03/03 87 p. Advisor: Simone Diniz Junqueira
Barbosa.
Abstract: Due to the propagation of personal computers, it is increasingly
important to build highly usable interfaces, taking into account users'
characteristics, preferences, and needs. Diverse models have been proposed to
cope with the complexity of human-computer interaction (HCI) design. However,
most of them deal at once with elements that should be addressed by distinct
models. Moreover, many of these models are based on cognitive theories, which
focus
mainly on the individual interacting with an application, without exploring the
fact that an application is the product of a rational decision-making process
carried out by a designer. This gap is dealt with by Semiotic Engineering, a
theory of HCI which views the interface as a designer-to-users message,
representing the designer's solution to what he believes are the users'
problems, needs, and preferences. In this message, he is telling users, directly
or indirectly, what he had in mind when he conceived the application. Within
Semiotic Engineering, this work extends scenarios, adapts an existing task model
and builds a model which treats interaction as conversation. Our goal is to
conceive models and representations that serve, each under a clear perspective,
as epistemic tools that support the designer's
reflection about the interactive solution being conceived. A small case study
was conducted to evaluate the quality of an interactive solution designed using
the proposed task and interaction models, in comparison with CTT, a widely used
task model.
[03_PhD_carneiro]*
Marcelo Medeiros CARNEIRO. Interfaces
assistidas para deficientes visuais utilizando dispositivos reativos e
transformadas de distância. [Title
in English: Assistive interfaces for the visually impaired using force feedback
devices and distance transforms.]
Ph.D. Thesis. Presentation: 30/04/2003 162 p. Advisors: Marcelo Gattass and Luiz Velho.
Abstract: The natural evolution of user interface models that occurred in the last few decades ended up popularizing a standard model based almost exclusively on visual metaphors. This process has left visually impaired users unable to use computers and to access new technologies. Some initiatives have been taken to revert this scenario. Most of them were based on adapting the existing models instead of creating specific solutions for the visually impaired community. The development of applications for such users requires the use of new technologies, tools and communication media. This thesis proposes the use of force feedback devices in the design and implementation of assistive user interfaces, helping blind users in simple 2D interaction tasks. By exploring the sense of touch, such devices can be used to improve the efficiency of the communication between the user and the interface. Also, this work investigates the use of distance transforms as a powerful mechanism to support many 2D interaction tasks.
[03_MSc_moreira]*
Marcos Magalhães MOREIRA. Integração semântica de sistemas de informação. [Title
in English: Semantic integration of information systems] M.Sc.Diss. Port. Presentation:
08/08/03 128 p. Advisor: Marco Antonio Casanova.
Abstract: This work presents a semantic approach to information integration
based on the OWL ontology language, proposed as a standard language to
facilitate the integration of different information sources. The information
integration problem is first presented and the use of ontologies to solve
it is addressed. Next, strategies to obtain and extract ontologies are
identified, emphasizing database systems. Alternative mappings between data
properties and instances of the resulting ontologies are also proposed. Finally,
a case study is developed to apply and validate the strategies presented. As a
result, an integrator system architecture is proposed and the implementation of
some of its components is discussed.
[03_MSc_leal]*
Marcus Amorim LEAL. LuaTS: um espaço de tuplas reativo orientado a eventos.
[Title in English: LuaTS - a reactive event-oriented tuple space] M.Sc.Diss. Port. Presentation: 24/02/03 87 p. Advisor: Noemi de La Rocque
Rodriguez.
Abstract: The widespread use of the Internet along with the rapid growth and
acceptance of the Web as a general application platform impose new requirements
associated with the integration and coordination of autonomous and heterogeneous
software components. The specific needs of this context led to the development
of new coordination mechanisms, among which the reactive tuple space. In this
work we present LuaTS, a reactive, event oriented tuple space that supports only
asynchronous calls. The system, developed in Lua, provides functionalities that
allow programmers to extend its basic semantics and also support a more flexible
tuple search and retrieval process. We describe the implementation of LuaTS and
explore its features through different examples that include classic concurrent
and distributed programming problems. We show that the uncoupling provided by
the tuple space model, together with an event oriented programming dynamics,
simplify inter-process synchronization and yield a clear execution stream,
improving, in many cases, the development process of distributed applications.
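To make the coordination model concrete, a reactive, event-oriented tuple space can be sketched in a few lines. This minimal model is ours, not LuaTS's actual API: a read request registers a callback instead of blocking, and the callback fires as soon as a matching tuple is (or becomes) available; `None` acts as a wildcard.

```python
class TupleSpace:
    """Minimal event-oriented tuple space (illustrative sketch)."""

    def __init__(self):
        self.tuples = []
        self.pending = []                  # (pattern, callback, remove_flag)

    @staticmethod
    def matches(pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def out(self, tup):
        """Insert a tuple, firing any pending request it satisfies."""
        for req in list(self.pending):
            pattern, callback, remove = req
            if self.matches(pattern, tup):
                self.pending.remove(req)
                callback(tup)
                if remove:
                    return                 # consumed by an 'in' request
        self.tuples.append(tup)

    def _request(self, pattern, callback, remove):
        for tup in self.tuples:            # already satisfiable?
            if self.matches(pattern, tup):
                if remove:
                    self.tuples.remove(tup)
                callback(tup)
                return
        self.pending.append((pattern, callback, remove))

    def rd(self, pattern, callback):       # asynchronous read (non-destructive)
        self._request(pattern, callback, remove=False)

    def in_(self, pattern, callback):      # asynchronous read-and-remove
        self._request(pattern, callback, remove=True)

ts = TupleSpace()
ts.rd(("temp", None), lambda t: print("got", t))  # registered before the tuple exists
ts.out(("temp", 21.5))                            # the insertion fires the callback
```

The event-oriented style keeps processes uncoupled: the reader never blocks waiting for the writer, mirroring the asynchronous-only semantics described above.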
[03_PhD_cortes]*
Mariela Inés CORTÉS. Suporte computacional a evolução de frameworks.
[Title in English: Computational support to framework evolution] Ph.D.Thesis. Port. Presentation: 17/06/03 127 p. Advisor: Carlos José Pereira de
Lucena.
Abstract: Framework development is expensive not only because of the intrinsic
difficulty related to the elicitation of domain knowledge but also because of
the lack of methods and techniques to support its evolution and interactive
development. The present thesis proposes the use of two complementary techniques
to support framework evolution:
refactoring and extension rules. The refactoring technique has been developed to
enable software re-structuring in a way to produce more readable and reusable
code. Extension rules have been proposed to change the structure of the
framework variation points by allowing the addition of new design
functionalities. Both techniques preserve the observable behavior of programs.
This property is formally verified in this work by using the CCS approach to model
checking. The proposed approach has been tested by means of a tool specially
developed to support the application of the defined rules.
[03_MSc_leite]
Matheus Costa LEITE. Um modelo de computação para circuitos de objetos.
[Title in English: A model of computation for object circuits] M.Sc.Diss. Port. Presentation: 11/04/03 68 p. Advisor: Carlos José Pereira de
Lucena.
Abstract: Object Oriented Programming is a mature, well established software
modeling technique. Nevertheless, the importance of its role is matched by the
consensus regarding its weaknesses and limitations. OO is not
a panacea, and, should it fail, alternatives must be found - some hybrid, while
others entirely new. In this work, we argue that the parallel between OO and
electric circuits is an interesting hybrid solution, for some of the basic
features found in such circuits are the same as the ones sought after as the
Holy Grail of Software Engineering - concurrency, modularity, robustness,
scalability, etc. - and that are not always achieved only with the traditional
OO approach. Hence, our proposal is the establishment of a correlation between
electric circuits and object-oriented programming. From the former comes the
circuit: a closed path where information flows and is processed. From the latter
comes the object: an abstract entity that constitutes the information flowing
within the circuit. Finally, from their union arises a new model of computation
- the object circuit - in which the benefits brought by each part are expected
to be complementary. We motivate our discussion with a collection of simple -
albeit elucidative - examples, followed by a case study in the simulation field.
In order to validate the functioning of these circuits, an object-circuit
implementation was built on top of the Java programming language.
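The circuit metaphor can be illustrated with a minimal sketch: processing nodes wired together, with objects flowing along the connections. This toy model is ours (the thesis's Java framework is richer), shown in Python for brevity:

```python
class Component:
    """A processing node in an 'object circuit': it receives an object,
    transforms it, and forwards the result to the next node."""

    def __init__(self, fn):
        self.fn, self.next = fn, None

    def connect(self, other):
        self.next = other
        return other                       # allows chained wiring

    def feed(self, obj):
        out = self.fn(obj)                 # the object is processed at this node
        return self.next.feed(out) if self.next else out

# Wire a small circuit: parse -> double -> format
parse = Component(int)
double = Component(lambda v: v * 2)
fmt = Component(lambda v: "result=%d" % v)
parse.connect(double).connect(fmt)
print(parse.feed("21"))                    # -> result=42
```

Modularity falls out of the wiring: each node can be replaced or rearranged without touching the objects that flow through the circuit.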
[03_PhD_mamanimacedo]*
Nestor Adolfo MAMANI MACEDO. Criando uma arquitetura de memória
corporativa baseada em um modelo de negócio. [Title in English: Architecture for
building a strategic corporate memory based on a business model] Ph.D.Thesis. Port.
Presentation: 03/07/03 172 p. Advisors: Julio Cesar Sampaio do
Prado Leite and Teresia Diana Lewe van Macedo-Soares.
Abstract: This work presents a proposal for building a corporate
memory architecture. It has three layers: sources, middleware and
repositories. First, we create a conceptual enterprise model,
named Organizational Baseline (OB), based on theories of business
management and organized according to an ontology-based
approach. Second, we use a multi-agent system as a means to
organize our proposed architecture. The OB is the main
component, where we store all information relevant to the
organization; it can be extracted from legacy systems,
databases, Internet sites or human beings.
enterprise model is based on constructs elicited from positioning
and emphasizing efficiency views of strategic analysis, together
with function analysis and total quality management.
[03_PhD_renteria]
Raúl Pierre RENTERÍA. Algoritmos para regressão por mínimos quadrados
parciais. [Title in English: Algorithms for partial least squares regression] Ph.D.Thesis. Port. Presentation: 19/03/03 80 p. Advisor:
Ruy Luiz Milidiú.
Abstract: The goal in many machine learning problems
is to model the complex relationship in a system between the input X
and output Y variables when no theoretical model is available.
Partial Least Squares (PLS) is a linear method for this kind of
problem, suited to the case of many input variables compared to the
number of samples. In this thesis we present versions of the classical
PLS algorithm designed for large data sets while keeping a good
predictive power. Among the main results we highlight PPLS
(Parallel PLS), a parallel version for the case of only one output
variable, and DPLS (Direct PLS), a fast and approximate version, for
the case of more than one output variable. On the other hand, we also
present some variants of the regression algorithm that can enhance the
predictive quality based on a non-linear formulation. We introduce LPLS
(Lifted PLS), for the case of only one dependent variable based on the
theory of kernel functions, KDPLS, a non-linear formulation for DPLS,
and MKPLS, a multi-kernel algorithm that can result in a more compact
model and a better prediction quality, thanks to the use of several
kernels for the model building.
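The classical single-response PLS algorithm that these variants build on can be sketched via NIPALS-style deflation. This is a didactic pure-Python version of standard PLS1 (not the thesis's PPLS/DPLS code); it assumes the data are centered or that the model passes through the origin, and the toy data set is ours:

```python
def pls1_fit(X, y, n_components):
    """PLS1 regression (single response) via NIPALS-style deflation."""
    X = [row[:] for row in X]
    y = y[:]
    W, P, Q = [], [], []
    n, m = len(X), len(X[0])
    for _ in range(n_components):
        w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(m)]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]                                      # weight vector
        t = [sum(X[i][j] * w[j] for j in range(m)) for i in range(n)]  # scores
        tt = sum(v * v for v in t)
        p = [sum(X[i][j] * t[i] for i in range(n)) / tt for j in range(m)]  # loadings
        q = sum(y[i] * t[i] for i in range(n)) / tt                    # regression coeff.
        for i in range(n):                                             # deflate X and y
            for j in range(m):
                X[i][j] -= t[i] * p[j]
            y[i] -= q * t[i]
        W.append(w); P.append(p); Q.append(q)
    return W, P, Q

def pls1_predict(x, W, P, Q):
    x, yhat = x[:], 0.0
    for w, p, q in zip(W, P, Q):
        t = sum(xj * wj for xj, wj in zip(x, w))   # score for this component
        yhat += q * t
        x = [xj - t * pj for xj, pj in zip(x, p)]  # deflate the new sample too
    return yhat

# Noise-free illustration: y = 2*x1 + x2
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
y = [2.0, 1.0, 3.0, 5.0]
W, P, Q = pls1_fit(X, y, n_components=2)
print(round(pls1_predict([3.0, 2.0], W, P, Q), 6))   # -> 8.0
```

The parallel and approximate versions in the thesis restructure exactly these matrix-vector products, which dominate the cost for large data sets.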
[03_PhD_rodrigues]
Rogério Ferreira RODRIGUES. Formação e controle de apresentações
hipermídia com mecanismos de adaptação temporal. [Title in English: Formatting
and controlling hypermedia presentations with temporal adaptation mechanisms] Ph.D.Thesis. Port.
Presentation: 27/03/03 156 p. Advisor: Luiz Fernando Gomes Soares.
Abstract: The development of hypermedia/multimedia systems requires
the implementation of an element responsible for receiving the
specification of a hyperdocument (structure, relations and presentation
descriptions) and controlling the presentation. This execution control
module is commonly known as hypermedia formatter. The goal of this
thesis is to discuss the aspects related to the design and
implementation of hypermedia formatters, proposing a hyperdocument
execution model and a framework to be reused in the implementation of
hypermedia systems. In particular, this work deals with document
adaptation issues, both before document presentation and on-the-fly,
prefetching mechanisms, integration with presentation tools, and
integration with the infrastructure (in particular, the operating system and
network). The proposal presented follows an object-oriented
specification approach, defining an architecture for temporal and
spatial synchronization in hyperdocument presentations. The proposal
can also be used in other application domains that need real-time
synchronization.
[03_MSc_maffra]
Sérgio Alvares Rodrigues de Souza MAFFRA. Propagação de som em
ambientes acústicos virtuais bidimensionais. [Title in English: Propagation of
sound in two-dimensional virtual acoustic environments] M.Sc.Diss. Port.
Presentation: 28/04/03 96 p. Advisors: Marcelo Gattass and Luiz
Henrique de Figueiredo.
Abstract: For a long time, computational simulation of acoustic
phenomena has been used mainly in the design and study of the acoustic
properties of concert and lecture halls. Recently, however, there is a
growing interest in the use of such simulations in virtual environments
in order to enhance users' immersion experience. Generally, we can say
that a virtual acoustic environment must be able to accomplish two
tasks: simulating the propagation of sound in an environment and
reproducing audio with spatial content, that is, in a way that
allows the recognition of the direction of sound propagation. These
tasks are the topic of the present dissertation. We begin with a
review of the most common algorithms for the simulation of sound
propagation and, briefly, of the reproduction of audio with spatial
content. We then present the implementation of a virtual acoustic
environment, based on beam tracing algorithms, which simulates the
propagation of sound waves in two-dimensional environments. As most
of the computation is made in a pre-processing stage, the virtual
acoustic environment implemented is appropriate only for spatially
fixed sound sources. The propagation paths computed are made of
specular reflections and of diffractions.
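The specular reflections mentioned above are typically computed with the image-source construction that beam tracing accelerates: reflecting the source across a wall yields a virtual source whose direct ray reproduces the reflected path. A minimal 2D building block (our illustration, not the dissertation's code):

```python
def mirror_source(source, wall):
    """Reflect a 2D sound source across the (infinite) line through a wall
    segment, producing the image source for one specular reflection."""
    (px, py), ((ax, ay), (bx, by)) = source, wall
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy      # foot of the perpendicular on the wall line
    return (2 * fx - px, 2 * fy - py)      # mirror image of the source

# A source at (1, 1) reflected across a wall lying on the x-axis:
print(mirror_source((1.0, 1.0), ((0.0, 0.0), (2.0, 0.0))))   # -> (1.0, -1.0)
```

Beam tracing enumerates sequences of such reflections in a pre-processing stage, which is why the implemented environment is restricted to spatially fixed sources.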
[03_PhD_cruz]
Sylvia de Oliveira e CRUZ. Identificando objetos através de pronomes. [Title in
English: Identifying objects through pronouns] Ph.D.Thesis. Port. Presentation: 31/03/03 125 p. Advisor: Carlos José
Pereira de Lucena.
Abstract: The need to know an object's identity in order to request its
services is a restriction in the construction of object-oriented
systems. To reduce this restriction we propose a form of generic
reference to an object, named pronoun. Pronouns allow the programmer to
specify how to send a message without knowing the destination object
by name. With pronouns, the visibility of an object increases without
breaking the rule of encapsulation. With this feature, decoupled server
classes may be constructed and used in different contexts. This work
defines some pronouns, shows their usefulness through examples, and
suggests a framework for the creation of new pronouns.
[03_MSc_melo]
Taciana Melcop Lacerda de MELO. Estratégia multi-agente para leilões
simultâneos de bens relacionados. [Title in English: A Multi-agent system for
simultaneous and related auctions] M.Sc.Diss. Port. Presentation:
20/03/03 119 p. Advisor: Ruy Luiz Milidiú.
Abstract: This work presents a multi-agent system to trade in
simultaneous auctions of related goods. The dissertation describes the
multi-agent architecture, and also the analysis and development of
strategies for trading in simultaneous auctions, where the purchase of
combined goods is required. Some well known problems in trading were
identified in order to design the architecture, such as price
prediction, good allocation, decision making, reasoning under
uncertainty, and demand segmentation. Each agent that composes the
system is concerned with one of those trading subproblems. This makes
it possible to apply different computational techniques to separately
solve the subproblems and then combine the solutions. The Trading
Agent Competition (TAC) is used to illustrate our approach. TAC
was chosen to test the developed heuristics since it presents a set
of characteristics that adequately fits the problem domain. Each
heuristic developed was tested and had its results compared to TAC
previous editions. Finally, the system showed high performance in the
very competitive scenarios tested using the TAC server environment.