Theses and Dissertations
2010
ABSTRACTS
Departamento de Informática
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
This file contains the list of the M.Sc. dissertations and Ph.D. theses presented to the Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio, Brazil, in 2010. They are all available in print format and, according to the authors' preference, some of them are freely available for download, while others are freely available for download exclusively to the PUC-Rio community (*).
For any requests, questions, or suggestions, please contact:
Rosane Castilho
bib-di@inf.puc-rio.br
Last update: 22/SEPTEMBER/2011
[10_PhD_rademaker]
Alexandre RADEMAKER.
On a proof theory for description logics. [Title
in Portuguese: Teoria da prova para lógicas de descrição].
Ph.D. Thesis. Eng. Presentation: 30/03/10. 117 p. Advisor: Edward Hermann
Haeusler.
Abstract: Description logics (DLs)
are a family of formalisms used to represent knowledge of a domain. They are
equipped with a formal logic-based semantics. Knowledge representation systems
based on description logics provide various inference capabilities that deduce
implicit knowledge from the explicitly represented knowledge. In this thesis we
investigate the Proof Theory for DLs. We introduce Sequent Calculi and Natural
Deduction for some DLs (ALC, ALCQ). Cut-elimination and Normalization are
proved for the calculi. It is argued that those systems can improve the
extraction of computational content from DL proofs for explanation purposes.
[10_PhD_sampaio]
Andréia Libório SAMPAIO.
Um modelo
para descrever e negociar modificações em sistemas Web. [Title
in English: A model to describe and negotiate changes in Web systems].
Ph.D. Thesis. Port. Presentation: 05/07/10. 166 p. Advisor: Clarisse Sieckenius de
Souza.
Abstract: The involvement of users in system development (End User
Development) has attracted significant attention in the last ten years, given
that users can adjust the applications according to their desires and needs.
The Web 2.0 has now made possible the development of a new culture of user
participation through social networks, where users share knowledge, experience
and abilities. The contribution of this research is to provide knowledge
and tools to foster a culture of participation among end users in the process of
software development. One of the social contexts where such participation
is particularly important is that of group systems, where changes desired by
some typically affect other users in ways that may not always be desirable for
all. Our main contribution is a model founded in Semiotic Engineering to
support describing and negotiating system evolution in group discussions. Its
main features are: (i) it provides a structure for the communication
between people involved in the negotiation, and (ii) it combines three
representational systems (interface language, natural language used in annotations,
and a script language for interaction in Web pages). These languages are
combined into a hybrid specification language for end users. We
implemented feature (ii) through the tool primoTiWIM ('This is what I Mean').
We focused on the implementation and evaluation of feature (ii), since
expression through this combined language affects the communication quality of
the whole process that we wish to support. In order to evaluate our
proposal we carried out empirical studies in which we
sought to observe the first reactions and impressions of users in face of the
primoTiWIM tool. We concluded that the proposed model has the potential to
promote a culture of participation in the context of the co-authorship between
user-developers and software developers.
[10_MSc_luna]
Andréia Miranda de LUNA.
Geração de interfaces RIA dirigida por ontologias. [Title
in English: Ontology driven rich Web interface generation].
M.Sc. Diss. Eng. Presentation: 06/01/10. 179 p. Advisor: Daniel Schwabe.
Abstract: In this Web 2.0 era, browsers render ever-richer graphical
interfaces. Today, virtually every type of application can benefit from the
ubiquity of Web browsers without compromising the user experience. Under the
Model-Driven Development paradigm, models represent more than abstraction and
documentation tools; they can also
perform as powerful specification languages. When transformation rules are
applied to these models, this can automate the repetitive task of generating
infrastructure code. This work proposes an abstract RIA interface description
language and a whole software environment that makes it possible for the
application designer to
automatically generate an executable interface from an abstract description.
As the development environment of the Semantic Hypermedia Design Method (SHDM),
the HyperDE framework became the target platform for the RIA interface code
generation. Our solution also introduces a message queue-based protocol as a way
to implement asynchronous communication between Model and View. This makes it
possible to update the interface with the partial results of a request still
being processed and, therefore, to improve the user experience, enhancing what Ajax
technology has accomplished so far.
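As a rough illustration of the message-queue idea described above (the names and the threading setup are ours, not the dissertation's), a Model can publish partial results that the View consumes asynchronously:

    import queue
    import threading
    import time

    def model_task(out):
        # The Model publishes partial results as they become available,
        # instead of blocking until the whole request finishes.
        for part in ("header", "rows 1-100", "rows 101-200", "footer"):
            time.sleep(0.1)            # simulate slow processing
            out.put(part)
        out.put(None)                  # sentinel: request finished

    def view_loop(inbox):
        # The View consumes messages asynchronously and updates the
        # interface incrementally, Ajax-style.
        while (msg := inbox.get()) is not None:
            print(f"render partial result: {msg}")

    q = queue.Queue()
    threading.Thread(target=model_task, args=(q,), daemon=True).start()
    view_loop(q)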
[10_MSc_neto]
Baldoíno Fonseca
dos Santos NETO.
JAAF: implementando agentes auto-adaptativos orientados a
serviços. [Title
in English: JAAF: implementing service-oriented self-adaptive agents].
M.Sc. Diss. Port. Presentation: 26/03/10. 106 p. Advisor: Carlos José Pereira de
Lucena.
Abstract: Service oriented multi-agent systems (SOMS) have
emerged in order to incorporate the benefits of two software engineering
disciplines: Service-oriented Architecture and Agent-oriented Software
Engineering. The first
provides loosely coupled services that can be used within different domains. The
second is based on a new software engineering paradigm that addresses the
development of Multi-agent Systems, which are composed of autonomous,
pro-active and reasoning entities, named software agents. One of the main goals
of SOMS is to help the development of service-oriented systems able to adapt
themselves to dynamic computing environments. Those systems must be able to
react at runtime to changes in their requirements, as well as to efficiently
accommodate deviations from their expected functionality or quality of
services. In this context, this work proposes a framework (Java self-Adaptive
Agent Framework - JAAF) to implement self-adaptive agents able to autonomously
and pro-actively discover services, decide about the most appropriate one and
adapt themselves if they face a problem while executing the service. The
applicability of the proposed framework is demonstrated through two case studies.
The first is a system responsible for generating susceptibility maps, i.e., maps
that show locations with landslides risks in a given area. The second is a
system where the main goal is to satisfy the users' needs related to travel.
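The adaptation cycle described above (discover, select, execute, adapt on failure) can be sketched roughly as follows; the function names are illustrative and do not come from JAAF:

    import random

    def discover_services():
        # Stand-in for a real service registry lookup.
        return ["route-planner-A", "route-planner-B"]

    def execute(service):
        # Stand-in for invoking the service; False simulates a failure.
        return random.random() > 0.3

    def self_adaptive_agent(goal):
        # Discover candidate services, pick the most appropriate one,
        # and adapt (fall back to the next candidate) on failure.
        candidates = sorted(discover_services())   # ranking policy goes here
        for service in candidates:
            if execute(service):
                return f"{goal}: satisfied by {service}"
        return f"{goal}: no service succeeded, replanning needed"

    print(self_adaptive_agent("plan trip"))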
[10_MSc_dias]
Bernardo Quaresma DIAS.
Um mecanismo
de tolerância a falhas para sistemas de gerenciamento de workflow. [Title
in English: A fault-tolerant mechanism for workflow management systems].
M.Sc. Diss. Port. Presentation: 09/10/10. 73 p. Advisors: Renato Fontoura de Gusmão
Cerqueira and Carlos Roberto Serra Pinto Cassino.
Abstract: In this work we propose a mechanism
for failure detection, group management and service replication, providing fault
tolerance for workflow management systems. Workflow management systems require
specific replication features, since such systems deal with non-deterministic
operations and update their internal state without any external calls. As a
case study we use an industrial automation system and analyze the needed
modifications to use the proposed mechanism and evaluate the impact of the
mechanism on the system's performance.
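The abstract does not detail the detector itself; a minimal timeout-based heartbeat failure detector, one common building block for such mechanisms, might look like this:

    import time

    class HeartbeatDetector:
        # Suspects a member if no heartbeat arrives within `timeout` seconds.
        def __init__(self, timeout=2.0):
            self.timeout = timeout
            self.last_seen = {}

        def heartbeat(self, member):
            self.last_seen[member] = time.monotonic()

        def suspected(self):
            now = time.monotonic()
            return {m for m, t in self.last_seen.items()
                    if now - t > self.timeout}

    d = HeartbeatDetector(timeout=0.1)
    d.heartbeat("replica-1")
    d.heartbeat("replica-2")
    time.sleep(0.2)
    d.heartbeat("replica-2")      # replica-2 stays alive
    print(d.suspected())          # {'replica-1'}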
[09_PhD_karlsson]
Börje Felipe Fernandes KARLSSON.
A model and an
interactive system for plot composition and adaptation, based on plan
recognition and plan generation. [Title
in Portuguese: Um modelo e um sistema interativo para composição e adaptação de
enredos, baseados em reconhecimento e geração de planos].
Ph.D. Thesis. Eng. Presentation: 19/01/10. 157 p. Advisors: Antonio L.
Furtado and Bruno Feijó.
Abstract: This work aims at a model and an
interactive system for plot composition and adaptation, based on a
plan-recognition / plan-generation paradigm. The generated plots must belong to
some chosen genre, to be previously specified in terms of static, dynamic and
behavioural aspects. The modeling technique involves the analysis of plots under
a fourfold perspective, in view of syntagmatic, paradigmatic, antithetic and
meronymic relations between the constituent events. The implemented interactive
system, named LogTell-R, demonstrates the feasibility of the proposed model.
[10_PhD_silva]
Bruno Santana da SILVA.
O uso de casos na reflexão em
ação em atividades de design de IHC. [Title
in English: Using cases in reflection in action in HCI design activities].
Ph.D. Thesis. Port. Presentation: 31/08/10. 193 p. Advisor: Simone Diniz
Junqueira Barbosa.
Abstract: The design process involves investigating the current situation to
define a design problem, to propose an intervention in form of a solution and to
evaluate whether it is satisfactory (Lawson, 2006). Schön (1983) investigates
the design practice epistemology as a process of reflection in action. In this
context, we explore case-based reasoning concepts (Kolodner and Leake, 1996) to
index and recover HCI design cases. An HCI design case can be understood as an
HCI problem and solution definition recorded in representations and models
during the design activity. Schön (1983) argues that a designer can enrich his
reflection in action process when he/she identifies similarities and differences
between the current case and other cases he/she already knows. This thesis
presents the results of a qualitative research study about the effects of
consulting design cases in the reflection in action process of undergraduate and
graduate Computer Science students during HCI design activities. In particular,
we consider a base of cases with contributions of third parties, that is, those
who consulted the cases did not participate in the definition of the problem nor
of the solution. In the observed sessions, consulting existing design cases
enriched the participants’ reflection in action process through conversation
with design artifacts. This result extends Schön’s result considering the
consultation of several design cases experienced by third parties.
Furthermore, we realized that differences and similarities between the proposed
HCI solutions and those consulted depend on the participants’ judgment on
the consulted design cases. This judgment may vary at different moments of a
design process conducted by the same person.
[10_MSc_crestana]
Carlos Eduardo Meger CRESTANA.
A token
classification approach to dependency parsing. [Title
in Portuguese: Uma abordagem por classificação token-a-token para o parsing de
dependência].
M.Sc. Diss. Port. Presentation: 10/03/10. 66 p. Advisor: Ruy Luiz Milidiú.
Abstract: One of the most important tasks in Natural Language Processing is
syntactic parsing, where the structure of a sentence is inferred according to a
given grammar. Syntactic parsing, thus, tells us how to determine the meaning of
the sentence from the meaning of the words in it. Syntactic parsing based on
dependency grammars is called dependency parsing. The Dependency-based syntactic
parsing task consists in identifying a head word for each word in an input
sentence. Hence, its output is a rooted tree, where the nodes are the words in
the sentence. This simple, yet powerful, structure is used in a great variety of
applications, like Question Answering, Machine Translation, Information
Extraction and Semantic Role Labeling. State-of-the-art dependency parsing
systems use transition-based
or graph-based models. This dissertation presents a token classification
approach to dependency parsing, by creating a special tagging set that helps to
correctly find the head of a token. Using this tagging style, any classification
algorithm can be trained to identify the syntactic head of each word in a
sentence. In addition, this classification model treats projective and
non-projective dependency graphs equally, avoiding pseudo-projective approaches.
To evaluate its effectiveness, we apply the Entropy Guided Transformation
Learning algorithm to the publicly available corpora from the CoNLL 2006 Shared
Task. These computational experiments are performed on three corpora in
different languages, namely: Danish, Dutch and Portuguese. We use the Unlabelled
Attachment Score as the accuracy
metric. Our results show that the generated models are above the average CoNLL
system performance. Additionally, these findings also indicate that the token
classification approach is a promising one.
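To illustrate the kind of head-encoding tag set the abstract refers to (the exact scheme in the dissertation may differ), each token can be tagged with the direction of its head and the head's offset among tokens sharing the head's part of speech:

    def encode_tags(heads, pos):
        # heads[i] is the index of token i's head (-1 for the root);
        # the tag stores direction, offset among same-POS tokens, and
        # the head's POS -- a common head-encoding scheme.
        tags = []
        for i, h in enumerate(heads):
            if h == -1:
                tags.append("ROOT")
                continue
            direction = "R" if h > i else "L"
            lo, hi = (i + 1, h + 1) if h > i else (h, i)
            offset = sum(1 for j in range(lo, hi) if pos[j] == pos[h])
            tags.append(f"{direction}{offset}_{pos[h]}")
        return tags

    # "The dog barks": 'The' -> 'dog', 'dog' -> 'barks', 'barks' = root
    print(encode_tags([1, 2, -1], ["DET", "NOUN", "VERB"]))
    # ['R1_NOUN', 'R1_VERB', 'ROOT']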
[10_PhD_sallesneto]
Carlos Soares de SALLES NETO.
Autoria de
documentos hipermídia orientada a templates. [Title
in English: Template based authoring of hypermedia documents].
Ph.D. Thesis. Port. Presentation: 02/09/10. 146 p. Advisors: Luiz Fernando
Gomes Soares and Clarisse Sieckenius de Souza.
Abstract: In the past years, the demand for hypermedia applications that
relate audio, video, image and other objects in time and space has increased. A
hypermedia application is formally specified in a document. It is usual to
group a set of structurally and semantically similar applications as a
document family. Conceptual models for hypermedia authoring are the
basis for document specification, but they are not satisfactory to define
document families. This work presents as a contribution a method for hypermedia
authoring based on the identification of these families and their instantiation
in order to create new documents. This method addresses the hypermedia document
authoring as a social practice, where specialist authors collaborate to help and
to ease the task of less skilled authors. Another contribution of this
work is to define a language for specification of these documents families,
named TAL (Template Authoring Language), which can be applied in the proposed
method. TAL relies mainly on extending the concept of composition, usually
present in hypermedia models. In TAL, compositions can be incomplete, with a few
of their internal elements purposely left undefined and with restriction rules on
how these missing elements may be included in the composition. In order to
achieve this language, an empirical study was conducted investigating the
learning and use of Nested Context Language (NCL). The choice of NCL relies on
the fact that it is based on a compositional model that can be extended to
specify document families, and because it is used by various professional
profiles, not just programmers, in hypermedia document authoring. Thus, a
third contribution of this work is to provide guidelines for NCL evolution,
especially improving its usability.
[10_MSc_oliveira]
Carlos
Vinicius Sousa de OLIVEIRA.
Mapas de disparidade utilizando cortes de grafo e multi-resolução. [Title
in English: Disparity maps using graph cuts with multi-resolution].
M.Sc. Diss. Port. Presentation: 29/03/10. 43 p. Advisor: Marcelo Gattass.
Abstract: Reconstructing the 3D information of a scene is a common task in
Computer Vision. Stereo matching is one of the most investigated techniques used
to perform this task, which basically consists of, given two images of a
scene seen from different viewpoints, determining corresponding pixels in these
two images and storing this information in a disparity map. Several methods have
been proposed to solve the stereo problem keeping good
performance and giving good quality results. This is however a very arduous task
which hardly achieves precise results with low computational power. In this
context, the Graph Cuts method has received much attention, as it aims to
solve the energy minimization problem in polynomial time. In this case the
stereo problem can be modelled as an energy minimization problem and, thus,
solved using the Graph Cuts technique. In this work we investigate the most
recent and efficient Graph Cuts methods and propose a method for establishing
the correspondences between two images in the context of multi-resolution, in
which a Gaussian pyramid for the input images is built and the Graph Cuts
method is applied at coarser levels, optimizing the performance and getting
more precise results through the use of the alpha-expansion algorithm. The Graph
Cuts and multi-resolution techniques are reviewed and the results of the
proposed method are presented and evaluated compared to similar methods.
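A rough coarse-to-fine skeleton of the multi-resolution scheme follows; `solve` stands in for the alpha-expansion graph-cuts solver, which is not implemented here, and power-of-two image sizes are assumed for simplicity:

    import numpy as np

    def downsample(img):
        # Crude 2x2 box filter standing in for a Gaussian pyramid level.
        return (img[0::2, 0::2] + img[1::2, 0::2]
                + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

    def coarse_to_fine(left, right, levels, solve):
        # Solve at the coarsest level first, then upsample each coarse
        # disparity map (positions and values scale by 2) to initialize
        # the next finer level.
        if levels == 0:
            return solve(left, right, init=None)
        coarse = coarse_to_fine(downsample(left), downsample(right),
                                levels - 1, solve)
        init = 2.0 * np.kron(coarse, np.ones((2, 2)))
        return solve(left, right, init=init)

    # A stub solver standing in for alpha-expansion graph cuts:
    stub = lambda L, R, init: np.zeros(L.shape) if init is None else init
    left = right = np.zeros((8, 8))
    print(coarse_to_fine(left, right, levels=2, solve=stub).shape)  # (8, 8)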
[10_PhD_souza]
Críston Pereira de SOUZA.
Políticas eficientes para revisitação de páginas Web.
[Title
in English: Efficient Web Page Refresh Policies]. Ph.D. Thesis. Port.
Presentation: 25/05/10. 84 p. Advisor: Eduardo Sany Laber.
Abstract: A search engine needs to continuously revisit web pages in order to
keep its local repository up-to-date. A page revisiting schedule must be defined
to keep the repository up-to-date using the available resources. In order to avoid
web server overload, the revisiting policy must respect a minimum amount of time
between consecutive requests to the same server. This rule is called politeness
constraint. Due to the large number of Web pages, we consider that a revisiting
policy is efficient when the mean time to schedule a revisit is sublinear on the
number of pages in the repository. However, when the politeness constraint is
considered, there are no existing efficient policies with theoretical quality
guarantees. We investigate three efficient policies that respect the politeness
constraint, called MERGE, RANDOM and DELAYED. We provide approximation factors
for the repository's up-to-date level for the MERGE and RANDOM policies. Based
on these approximation factors, we devise a 0.77 lower bound for the
approximation factor provided by the RANDOM policy and we present a conjecture
that 0.927 is a lower bound for the approximation factor provided by the MERGE
policy. We evaluate these policies through simulation experiments which try to
keep a repository with 14.5 million web pages up-to-date. Additional experiments
based on a repository with Wikipedia's articles showed that the MERGE policy
provides better results than a natural greedy strategy. The main conclusion of
this research is that there are simple and efficient policies that can be
applied to this problem, even when the politeness constraint must be respected,
resulting in a small loss of repository's up-to-date level.
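The politeness constraint itself is easy to state in code; the sketch below only delays requests so that two hits on the same server are at least `min_gap` apart, and is not one of the MERGE/RANDOM/DELAYED policies studied in the thesis:

    def schedule(pages, min_gap):
        # pages: iterable of (page, server) in the order some policy chose.
        next_free = {}        # server -> earliest time it may be hit again
        t = 0.0
        for page, server in pages:
            t = max(t, next_free.get(server, 0.0))
            yield t, page
            next_free[server] = t + min_gap

    plan = schedule([("a.com/1", "a.com"), ("a.com/2", "a.com"),
                     ("b.org/1", "b.org")], min_gap=1.0)
    print(list(plan))   # a.com/2 waits 1s; b.org/1 needs no extra delay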
[10_MSc_fleischman]
Daniel FLEISCHMAN.
An Improved Exact Method for the UBQP. [Title
in Portuguese: Um método melhorado para o UBQP]. M.Sc. Diss. Port.
Presentation: 04/08/10. 56 p. Advisor: Marcus Vinicius Soledade Poggi de Aragão.
Abstract: Unconstrained Binary Quadratic Programming (UBQP) is widely
studied. It is a powerful modeling tool and its associated problem is NP-hard. In
this work a new approach is introduced, which can be used to build an exact
algorithm. Also, the fundamental idea behind it can be used in an even wider
family of problems. This exact algorithm derived from the new method is highly
parallelizable, which is a desired feature nowadays, when cloud computing is
a reality. For reasonably large instances of UBQP, the new method can
parallelize to hundreds, or even thousands, of cores easily, with a near-linear
speedup.
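The abstract does not detail the new method, but the problem it attacks is easy to state: optimize f(x) = x^T Q x over binary vectors x. A small sketch of the objective, with the standard incremental evaluation of a one-bit flip, a building block common to many UBQP algorithms:

    import numpy as np

    def ubqp_value(Q, x):
        # UBQP objective: f(x) = x^T Q x, x binary.
        return float(x @ Q @ x)

    def flip_delta(Q, x, i):
        # Change in f(x) when bit i flips (Q assumed symmetric):
        # delta = (1 - 2*x_i) * (Q_ii + 2 * sum_{j != i} Q_ij * x_j)
        s = Q[i] @ x - Q[i, i] * x[i]
        return (1 - 2 * x[i]) * (Q[i, i] + 2 * s)

    Q = np.array([[2.0, -1.0], [-1.0, 3.0]])
    x = np.array([1, 0])
    print(ubqp_value(Q, x))                         # 2.0
    print(ubqp_value(Q, x) + flip_delta(Q, x, 1))   # 3.0 = f([1, 1])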
[10_MSc_trindade]
Daniel Ribeiro TRINDADE. Técnicas de navegação 3D usando o cubo de distâncias. [Title
in English: 3D Navigation Techniques Using the Cube Map]. M.Sc. Diss. Port.
Presentation: 26/03/10. 80 p. Advisor: Alberto Barbosa Raposo.
Abstract: The use of 3D viewers is becoming common in several activities.
The appearance of new technologies, with the resulting increase in processing
power, made possible the creation of larger and richer 3D virtual environments.
However, the navigation in 3D environments, especially the multiscale ones, is
still a problem for many users. The goal of this work is to propose solutions to
some 3D navigation problems in order to improve the user experience with
this kind of application. In this sense, techniques to automatically adjust the
navigation speed, the clipping planes and the rotation center are presented. A
solution is also proposed for the detection and treatment of collision
between the camera and the scene, and a technique that aims to prevent users
from getting lost when no scene object is visualized. These solutions are based
on the construction and maintenance of a structure called cube map, which
provides information about the spatial location of the scene points relative to
the camera. Currently in development at Tecgraf/PUC-Rio, the SiVIEP (Integrated
Visualization System for Exploration and Production) is a viewer aimed at
professionals in the area of oil exploration and production that was used to
detect and understand the mentioned problems, and also for validating the
implemented solutions.
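One of the adjustments described above, speed scaling, reduces to reading the nearest depth out of the cube map; a toy version follows (the actual heuristics in the dissertation may differ):

    def navigation_speed(cube_map_depths, base_speed=1.0,
                         min_speed=0.01, max_speed=100.0):
        # cube_map_depths: distances from the camera to the scene sampled
        # over the six cube-map faces. Scaling speed by the nearest hit
        # slows the camera near geometry and speeds it up in open space.
        nearest = min(d for face in cube_map_depths for d in face)
        return max(min_speed, min(max_speed, base_speed * nearest))

    faces = [[10.0, 12.0], [8.0, 9.5], [50.0, 60.0],
             [7.2, 11.0], [30.0, 25.0], [14.0, 18.0]]
    print(navigation_speed(faces))   # 7.2: slows near the closest surface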
[10_MSc_santos]
Danielle Loyola
SANTOS.
Um modelo de operações para aplicações na Web semântica. [Title
in English: An operation model for semantic Web applications].
M.Sc. Diss. Port. Presentation: 6/01/10. 101 p. Advisor: Daniel Schwabe.
Abstract: The Operation Model aims to describe the application's business logic
through additional modeling primitives to represent operations. The Operation
Model classes represent the definition of what might be an operation in an
application, i.e., which values should be provided for its execution and which
values should be returned as a result of an execution. The main motivation for
the development of this work is the fact that there are no models in the
literature specifying operations semantically, integrated with the other models
of hypermedia application development methods. The SHDM method was extended to
include this new model.
[10_PhD_silva]
David Sotelo Pinheiro da SILVA.
On the permutation flow shop scheduling problem. [Title
in Portuguese: Sobre o problema de escalonamento permutation flow shop].
Ph.D. Thesis. Port. Presentation: 06/05/10. 80 p. Advisor: Marcus Vinicius
Soledade Poggi de Aragão.
Abstract: In the last fifty years, the
Permutation Flow Shop Scheduling Problem with makespan minimization (PFS) has
been a central and well-studied problem in the combinatorial optimization community,
known by its intractability from theoretical and computational aspects. In
this work, three major contributions were obtained for the PFS problem. The
first one is an approximation algorithm for the PFS problem with n jobs
on m machines. This algorithm achieves an approximation guarantee of
O(√(n+m)) and runs in linear time. This is the best performance ratio
yet obtained for the PFS problem in the case of n=Θ(m). Furthermore,
a novel connection between PFS and monotone subsequence problems is established,
resulting in an extension of the Erdős-Szekeres theorem to weighted monotone
subsequences. The second result is a faster algorithm for the PFS with
n jobs and two machines (2-PFS). We give an O(n log k) algorithm that
determines optimal solutions for the 2-PFS problem, where k ≤ n is
the minimum number of cliques necessary to cover the nodes of an underlying
interval graph. To the best of our knowledge, this is the first
improvement upon the O(n log n) time complexity of the classical
algorithm from Johnson. The third contribution of this work is a
new family of competitive deterministic heuristics for the PFS problem.
Four new heuristics are introduced as extensions of the classical NEH heuristic.
Such heuristics are based on pruning techniques on the implicit enumeration tree
of the PFS problem. Computational results attest that the new proposed
methods stand among the most effective for the PFS problem.
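For reference, the classical algorithm from Johnson mentioned above fits in a few lines (this is the textbook rule, not the thesis's O(n log k) refinement): jobs with p1 <= p2 go first in ascending p1, the rest go last in descending p2.

    def johnson(jobs):
        # jobs: list of (p1, p2) processing times on machines 1 and 2.
        first = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                       key=lambda j: jobs[j][0])
        last = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                      key=lambda j: -jobs[j][1])
        return first + last

    def makespan(jobs, order):
        t1 = t2 = 0
        for j in order:
            t1 += jobs[j][0]                 # machine 1 finishes job j
            t2 = max(t2, t1) + jobs[j][1]    # machine 2 starts when both ready
        return t2

    jobs = [(3, 6), (5, 2), (1, 2)]
    order = johnson(jobs)
    print(order, makespan(jobs, order))      # [2, 0, 1] 12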
[10_MSc_suescun]
Elizabeth SUESCUN MONSALVE.
Construindo um jogo educacional com modelagem
intencional apoiados em princípios de transparência. [Title
in English: Building an educational game with intentional modeling supported on
principles of transparency].
M.Sc. Diss. Port. Presentation: 25/03/10. 196 p. Advisor: Julio Cesar Sampaio
do Prado Leite.
Abstract: Educational games have been proposed for
teaching computer science, and software engineering as well. This work presents
an approach for intentional modeling supported by concepts of transparency
towards the implementation of the educational game SimulES. SimulES is a game
for helping software engineering teaching. The approach is innovative in that
context. We believe that intentional modeling is akin to game modeling, since it
allows us to represent the interaction and collaboration among the actors as
well as concepts of transparency. The intentional model we produced was used to
develop the software that implements SimulES-W, a Web based version of the game.
[10_MSc_oliveira]
Felipe Nogueira Barbará de OLIVEIRA. Aplicação adaptativa de guia eletrônico
utilizando o Ginga-NCL. [Title
in English: Adaptive electronic guide application based on Ginga-NCL].
M.Sc. Diss. Port. Presentation: 23/08/10. 104 p. Advisors: Luiz Fernando
Gomes Soares and Rogério Ferreira Rodrigues.
Abstract: One of the
consequences of the digitalization of TV systems is the increased amount of
available channels and, as a consequence, the great number of services that can
be offered to viewers. Due to the great amount of content available, there has
been a need for applications responsible for helping viewers to find what they
want to watch. These applications are called EPGs (Electronic Program Guides).
Most work related with EPG focuses either on the development of recommendation
systems or on the design of EPG user interfaces. A recommendation system
integrated with an EPG adapts the information to be presented based on the
viewer’s preferences. On the other hand, the EPG application is responsible for
gathering information and generating the EPG. Usually this EPG application can
only be replaced by sporadic updates. Unfortunately, as far as the author knows,
there is no work that offers support for application adaptations in real-time,
which would make it possible to change algorithms without stopping the EPG
presentation. This dissertation discusses the importance of providing real time
adaptations and presents an EPG implementation based on the support offered by
Ginga-NCL. The application modular architecture provides support to dynamic
adaptations through a metaservice responsible for these tasks.
[10_MSc_rabelo]
Flávia Andrade RABELO.
Uma sistemática baseada em data warehousing para apoio ao
Governo Eletrônico. [Title
in English: A systematic approach based on data warehousing to support e-Government].
M.Sc. Diss. Port. Presentation: 30/08/10. 123 p. Advisor: Rubens Nascimento
Melo.
Abstract: Aiming to adapt to the new way of relationship between
government and society, governments are faced with the growing need to offer
easy access to public information to a public that grows every day. To
contribute to this purpose, the main objective of this work is to develop a
system based on Data Warehousing to support electronic government. Thus, the
objective is to offer a service of access to analytical information to
society via the Internet and at low cost. As a proof of concept of the
proposed approach, real data from the Brazilian statistical survey (PNAD),
available on the Web, is used. Applying the approach to these data generates
an analytical access service to information on the Web (in cubes for analysis).
This approach follows the SaaS pattern (Software as a Service), that is, the
user may use this service without having to create and maintain local
infrastructure for this analysis. The use of several advanced technologies,
guided by this approach, aims to contribute to best practices for e-government
as well as the effective provision of a "demo" version of a cube for PNAD data
analysis as a service on the Web.
[10_MSc_souza]
Guilherme Schirmer de SOUZA.
Renderização de pele humana em tempo real para
jogos. [Title
in English: Real time skin rendering for games].
M.Sc. Diss. Port. Presentation: 24/08/10. 101 p. Advisor: Bruno Feijó.
Abstract:
Skin rendering is a fundamental research topic for the digital entertainment
industry. Realistic results are very challenge to obtain, especially for real
time applications. In this dissertation, two skin rendering techniques are
studied and developed to simulate light behavior through human skin. Both
techniques are based on physical and empirical models and use texture space in GPU
to reproduce diffuse illumination and subsurface scattering in real time. This
dissertation compares these two techniques and gives guidelines for the
implementation of a skin rendering module in 3D game engines.
[10_MSc_honorato]
Gustavo de Sá Carvalho HONORATO.
NCL-Inspector – uma ferramenta para análise de
código NCL. [Title
in English: NCL-Inspector – an NCL static code analysis tool].
M.Sc. Diss. Port. Presentation: 25/03/10. 120 p. Advisor: Simone Diniz
Junqueira Barbosa.
Abstract: Ginga-NCL and the NCL language were selected
as recommendations for the interactive multimedia environment and language for
IPTV by the International Telecommunication Union (ITU). In order to promote the
use of these technologies, it is necessary to create tools to help developing
applications using NCL. At present, the support provided by tools for NCL
development is quite limited. There are only a few available systems that can
create NCL applications. These tools do not help developers in assessing the
quality of the NCL code. In this dissertation, we propose NCL-Inspector, a
critique system of the NCL code, which aims at leveraging the developer's skills
in detecting error-prone NCL applications. Also, we specified the requirements
for critiquing systems for source code inspection.
[10_MSc_mourad]
Gustavo Lopes MOURAD.
Um framework para a construção de mediadores oferecendo
eliminação de duplicatas. [Title
in English: A framework for the construction of mediators offering deduplication].
M.Sc. Diss. Port. Presentation: 14/09/10. 69 p. Advisor: Karin Koogan
Breitman.
Abstract: As Web applications that obtain data from different
sources (Mashups) grow in importance, timely solutions to the duplicate
detection problem become central. Most existing techniques, however, are based
on machine learning algorithms, that heavily rely on the use of relevant,
manually labeled, training datasets. Such solutions are not adequate when
talking about data sources on the Deep Web, as there is often little information
regarding the size, volatility and hardly any access to relevant samples to be
used for training. In this thesis we propose a strategy to aid in the extraction
(scraping), duplicate detection and integration of data that resulted from
querying Deep Web resources. Our approach does not require the use of
pre-defined training sets, but rather uses a combination of a Vector Space
Model classifier with similarity functions, in order to provide a viable
solution. To illustrate our approach, we present a case study where the proposed
framework was instantiated for an application in the wine industry domain.
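A stripped-down version of the idea, token vectors compared by cosine similarity against a threshold instead of a trained classifier, can be sketched as follows (illustrative only, not the dissertation's implementation):

    import math
    from collections import Counter

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def duplicates(records, threshold=0.7):
        # Pairs whose token-vector similarity crosses the threshold are
        # flagged as likely duplicates -- no labeled training data needed.
        vecs = [Counter(r.lower().split()) for r in records]
        return [(i, j) for i in range(len(records))
                for j in range(i + 1, len(records))
                if cosine(vecs[i], vecs[j]) >= threshold]

    wines = ["Chateau Margaux 2005 Bordeaux",
             "chateau margaux 2005 bordeaux red",
             "Vega Sicilia Unico 1998"]
    print(duplicates(wines))   # [(0, 1)]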
[10_MSc_souza]
Gustavo Soares SOUZA.
Sobre a Engenharia Semiótica da interação com sistemas de
monitoração. [Title
in English: On the Semiotic Engineering of interaction with monitoring
systems].
M.Sc. Diss. Port. Presentation: 29/06/10. 108 p. Advisor: Clarisse Sieckenius
de Souza.
Abstract: Monitoring systems of both Internet applications and
services are, nowadays, a rich source of information and an agile decision
support tool for system administrators. In environments with a high access
volume and hundreds of servers, communicating abnormal events in an interface
accessible through a Web browser is considered a major interface design and
interaction challenge, particularly in situations where information and
decision-making are critical. Given that, the main focus of this work is to
study monitoring systems in a big Brazilian Internet company. We begin with a
detailed evaluation of systems used in the company and then we elaborate,
implement and evaluate a new design model that we propose for similar monitoring
systems. This new model demonstrates how the Semiotic Engineering, a semiotic
theory of HCI, can be combined with elements of a human-factor and ecological
psychology design theory, Ecological Interface Design, in order to elaborate
the communicability of and build the interaction for monitoring systems. The
combination of both approaches provides representation and communication
characteristics that, according to the evaluations conducted in
this research, make decision-making support more efficient
and agile for users in this domain.
[10_MSc_encarnacao]
Hildebrando Trannin da ENCARNAÇÃO.
NCLite: explorando o conceito de cenas
interativas em ferramentas de autoria para TV digital. [Title
in English: NCLite: exploring the concept of interactive scenes in digital TV
authoring tools]. M.Sc. Diss. Port. Presentation: 02/07/10. 124 p. Advisor:
Simone Diniz Junqueira Barbosa.
Abstract: Digital TV enables interaction
in television programs. However, authors who want to produce this kind of
application have to learn programming languages, such as NCL, the declarative
language of the Brazilian standard for digital TV. Authoring tools can provide
us with adequate abstractions that facilitate the authoring process. However,
nowadays we don't have authoring tools that do so for the NCL language. In this
dissertation, we present a tool that allows authors to visualize and edit
content without knowing NCL. This tool accelerates and eases the
authoring process of digital TV applications.
[10_MSc_saldanha]
Hugo Marques de Castro SALDANHA.
Utilizando anotações em linguagens orientadas a
objetos para suporte à programação orientada a componentes. [Title
in English: Using annotations in object oriented languages to support component
oriented programming].
M.Sc. Diss. Port. Presentation: 27/08/10. 74 p. Advisors: Renato Fontoura de
Gusmão Cerqueira and Maria Júlia Dias de Lima.
Abstract: In
component-based distributed systems, the use of object-oriented programming
languages is very common to define, through frameworks, programming interfaces
for building and using components. However, most programming models that follow
this approach use the object-oriented language's own concepts, such as classes
and interfaces, to define a programming interface that follows a
component-oriented paradigm. As a consequence, the source code mixes component
functionality aspects with the programming model's specific implementation
mechanisms, which prevents the reuse of this component in other frameworks and,
moreover, adds extra complexity to the source code. Recently, a tendency
to the addition of meta-data to the component implementation has been observed,
by the use of specific markings on the source code. These meta-data provide the
necessary information for tools, based on code generation or based on reflection
mechanisms, to perform the integration of the component implementation with the
component model’s infrastructure support. Some authors call this technique
Attribute-Oriented Programming. Languages such as Java and C# already provide
native support to this technique through annotations. The goal of this dissertation
is to investigate the adoption of the attribute-oriented programming technique with
object-oriented languages to build component-based applications. As part of the
research, we developed a new programming mechanism based on attributes for the
Java version of the SCS middleware.
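Python has no annotations in the Java sense, but a decorator plays the same role and conveys the flavor of attribute-oriented programming (the names below are illustrative, not the SCS API):

    def facet(name):
        # Plays the role of an annotation: attaches component metadata
        # without mixing framework classes into the business code.
        def mark(cls):
            cls.__facet__ = name
            return cls
        return mark

    @facet("IHello")
    class HelloServant:
        def say_hello(self):
            return "hello"

    def deploy(cls):
        # Infrastructure reads the metadata reflectively, the way an
        # annotation processor or reflection-based container would.
        print(f"registering facet {cls.__facet__!r} for {cls.__name__}")

    deploy(HelloServant)   # registering facet 'IHello' for HelloServant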
[10_MSc_jabour]
Iam Vita JABOUR.
O Impacto de atributos estruturais na identificação de tabelas
e listas em documentos HTML. [Title
in English: The impact of structural attributes to identify tables and lists in
HTML documents].
M.Sc. Diss. Port. Presentation: 25/11/10. 64 p. Advisors: Eduardo Sany Laber and
Raul Pierre Renteria.
Abstract: The segmentation of HTML documents has
been essential to information extraction tasks, as shown by several works in
this area. This work studies the link between an HTML document and its visual
representation to show how it helps segment identification using a structural
approach. For this, we investigate how tree edit distance algorithms can find
structural similarities in a DOM tree, using two tasks to execute our
experiments. The first one is the identification of genuine tables where we
obtained a 90.40% F1 score using the corpus provided by (Wang e Hu, 2002). We
show through an experimental study that this result is competitive with the best
results in the area. The second task studied is the identification of product
listings in e-commerce sites. Here we get a 94.95% F1 score using a corpus with
1114 HTML documents from 8 distinct sites. We conclude that algorithms that
compute tree similarity provide competitive results for both tasks, making
them also good candidates to identify other types of segments.
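A much-simplified top-down variant of tree edit distance (children aligned by sequence edit distance, with subtree distance as substitution cost) conveys how structural similarity between DOM subtrees can be measured; the dissertation's actual algorithms may differ:

    def tree_dist(a, b):
        # A tree is (label, [children]); cost 1 per inserted, deleted or
        # relabeled node. Simplified: children are only aligned in order.
        def size(t):
            return 1 + sum(size(c) for c in t[1])

        def seq(xs, ys):
            d = [[0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
            for i in range(1, len(xs) + 1):
                d[i][0] = d[i - 1][0] + size(xs[i - 1])
            for j in range(1, len(ys) + 1):
                d[0][j] = d[0][j - 1] + size(ys[j - 1])
            for i in range(1, len(xs) + 1):
                for j in range(1, len(ys) + 1):
                    d[i][j] = min(d[i - 1][j] + size(xs[i - 1]),
                                  d[i][j - 1] + size(ys[j - 1]),
                                  d[i - 1][j - 1] + tree_dist(xs[i - 1], ys[j - 1]))
            return d[-1][-1]

        return (0 if a[0] == b[0] else 1) + seq(a[1], b[1])

    row = lambda n: ("tr", [("td", [])] * n)
    print(tree_dist(("table", [row(3), row(3)]),
                    ("table", [row(3), row(2)])))   # 1: one <td> deleted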
[10_PhD_santos]
Ismael Humberto Ferreira dos SANTOS.
A
collaborative environment for offshore
engineering simulations based on visualization and workflow. [Title
in Portuguese: Um Ambiente colaborativo para simulações em Engenharia Offshore
baseado em visualização e workflow].
Ph.D. Thesis. Port. Presentation: 09/04/10. 145 p. Advisors: Marcelo Gattass
and Alberto Barbosa Raposo.
Abstract: Deep water production systems, including
floating production units (platforms or ships) and all the equipment playing a
part in the production process, are currently designed by means of complex
computational modeling systems. Those systems involve the areas of
structural calculus, meteo-oceanography (currents, waves and wind forces),
hydrodynamics, risers (rigid and flexible steel pipes for carrying oil from the
well in subsurface up to the production unit), mooring systems, submarine
equipment, seabed foundations and geological/geotechnical risk assessment.
The project of a new production unit is a lengthy and expensive process, which
can last many years and consume hundreds of millions of dollars depending on the
complexity of the unit and how mature the technology developed to make the
project technically and economically feasible is. Projects are conducted by
diverse specialists, sometimes geographically distributed, yielding independent
but highly interrelated artifacts and results. The need for collaboration
is an inherent characteristic of deep-water floating production unit projects.
The possibility to share information among users, control the execution of
different modeling tools, visualize and manipulate virtual 3D models in
immersive Virtual Reality (VR) environments is pushing the limits of teamwork
activities in the oil & gas industry, especially in Offshore Engineering. The
objective of this thesis is to establish the fundamental principles and address
the main issues in the development of a Collaborative Environment for
Engineering, named CEE (Collaborative Engineering Environment), in order to
allow the collaborative visualization and interpretation of simulation results
produced in engineering projects, which in general also involve different
specialties. Due to the multi-disciplinary characteristic of those projects,
collaborative visualization becomes a
key component during the life cycle of engineering projects, especially those in
Offshore Engineering, used in this work as a case study. We propose an
integrated collaborative environment to be used by project engineers' teams
during the execution and control of complex engineering projects, as is the case
of the projects of deep-water floating production units. The system requirements
were carefully compiled aiming to enable an effective collaboration among the
participants, creating a suitable environment for discussing, validating,
interpreting and documenting the results of the simulations executed during the
different phases of an engineering project. To further improve the
interpretation capacity and the comprehension of results, support for
immersive 3D visualization is also available in the visualization tool,
especially tailored for the
Offshore Engineering domain. In order to meet these goals, we devise a
Service-Oriented Architecture (SOA) for CEE. This architecture is composed of
the integration of different technologies of Computer Supported Collaborative
Work (CSCW), Virtual Reality (VR) and Grid Computing (GC). We use a Scientific
Workflow Management System (ScWfMS), based on BPEL (Business Process Execution
Language), a Grid-enabled software infrastructure for executing engineering
simulations, and a Video Conferencing system (VCS) to furnish audio and video
collaboration. For visualizing the results, a VR visualization tool, specialized
for Offshore Engineering, ENVIRON, has also been developed in conjunction with
the PUC-Rio/TecGraf team.
[10_MSc_santos]
Jefferson de Barros SANTOS.
Infraestrutura para provadores interativos de
teoremas na Web. [Title
in English: Infrastructure for Web-based interactive theorem provers].
M.Sc. Diss. Port. Presentation: 24/03/10. 89 p. Advisor: Edward Hermann
Haeusler.
Abstract: Automatic theorem proving consists of proving
mathematical theorems by means of computer programs. Depending on the logic used,
the process of proving a formula is not computable. Moreover, depending on the
deductive system applied, the search for a proof can involve the application
of long sequences of axioms and inference rules, reinforcing the need for human
intervention in the proof process. Such systems are known as interactive theorem
provers or proof assistants. In a typical scenario, the user interacts with the
prover through a graphical interface, usually a desktop application. Recently,
however, applications like those started to be delivered to users through the
web. This way of software deployment spares end users from dealing with
complex activities like prover installation and configuration and allows the
user to access the system from different machines with a simple Internet
connection. In this research we study the use of the Web as a platform for
building interactive theorem proving environments. Our purpose is to study
some interaction models between user and automated proof environments and verify
how these models can be adapted to work as a web application. As a result we
present a graphical tool for visualization and direct manipulation of formal proofs
on the Web, serving as an alternative interface between the user and proving machines.
[10_PhD_vasconcelos]
José Eurico de VASCONCELOS FILHO.
Um modelo de suporte ao design
baseado no rationale: relacionando espaço de problema ao espaço de solução no
design. [Title
in English: A design model based on rationale: relating problem space to
solution space of design].
Ph.D. Thesis. Port. Presentation: 27/08/10. 124 p. Advisors: Simone Diniz
Junqueira Barbosa and João José Vasco Peixoto Furtado.
Abstract: The
design of interactive systems is a complex, iterative and collaborative
process, composed of different activities that are interrelated in the
composition of a final product, and requires different expertise to perform.
The information produced during each activity is very important for the
continuity of the process and to understand, explain and maintain the product
produced. This information must be related and integrated in a clear and
coherent discourse, allowing traceability and addressing the origin and the context
in which it was produced. It is noted, however, that existing proposals for
capturing and modeling the HCI design process provide a fragmented and/or incomplete
view of this process. For this reason, we propose an epistemic model,
based on Design Rationale (DR), suitable for the recording and modeling of the
design of interactive systems. Based on the review and analysis of the key
views and proposals of the design process, their requirements, models, notations
and support languages, the epistemic model Idea has been proposed. The
model has as its main goal to register and report design activities in a
cohesive manner, offering support for the reflection of the design team about
the activities of the design process as well as information sources for the
traceability of information. For this, the model proposes the integration
of inputs and outputs (e.g. requirements, models, artifacts) of the activities
of analysis (problem space) and of conceptual design (solution space) through DR.
The model was implemented in the Deprost prototype and adopted in the design of
part of the WikiMapps project, allowing us to evaluate the proposal in a real
case study.
[10_MSc_canepa]
Katia Fabiola CANÉPA VEGA.
TREG: un juego de entrenamiento en ingeniería de
requisitos. [Title
in Portuguese: Um jogo de treinamento em engenharia de requisitos].
M.Sc. Diss. Esp. Presentation: 26/02/10. 133 p. Advisor: Hugo Fuks and Gustavo
Robichez de Carvalho.
Abstract: TREG is a game for training in
Requirements Engineering, specifically in the workshop technique. It was
created in Second Life using its building and scripting possibilities. This
work presents an exploration in the use of a
prototyping process and techniques for developing the game. The prototyping
process of the book "Effective Prototyping for Software Makers" is an iterative
process which was customized for the development of the game in this virtual
world. Branching Stories is a simulation genre that gives an overview of all the
possible paths the player can take in TREG. Scenario is a Requirements
Engineering technique used for the specification of the simulations modeled in
the Branching Stories graph. The design of TREG used the Scenarios
specifications for modeling the software perspectives. The state machine
diagrams show the dynamic behavior of the TREG objects, a class diagram
represents the objects created in Second Life, and the communication diagrams
show the relationships and the flow of messages between objects. This work also
shows the results of the evaluation made and the difficulties faced by the
participants while TREG was used.
[10_MSc_leal]
Kelly Azevedo Borges LEAL.
Relato de experiência da implantação de boas práticas
de Engenharia de Software em um ambiente heterogêneo. [Title
in English: Reporting the experience of deployment of Software Engineering best
practices in a heterogeneous environment].
M.Sc. Diss. Port. Presentation: 19/08/10. 118 p. Advisor: Arndt von Staa.
Abstract:
Agile methodology is a recent Software Engineering strategy (it became popular
in 2001) that was created to be flexible and adaptable - a characteristic
very important to deal with constant requirements changes during a project
lifecycle. Scrum, the agile methodology used during this study, requires an
adequate infrastructure in order to offer its benefits. This study implemented
Scrum and an adequate infrastructure of processes and tools to support it, inside
an organization that didn't use any kind of project management before. With the
establishment of some goals using the GQM model, some improvements were expected
and the results are detailed in this study.
[10_MSc_silva]
Leandra Mara da SILVA.
Uma abordagem sensível à história para detecção de
anomalias de código. [Title
in English: A history sensitive approach to code anomaly detection].
M.Sc. Diss. Port. Presentation: 23/08/10. 120 p. Advisors: Carlos José Pereira
de Lucena and Alessandro Fabrício Garcia.
Abstract: Code
modularization can directly influence software maintainability.
Therefore, researchers have proposed mechanisms to contribute to the
identification of potential modularity anomalies in source code. In this context,
a widespread metrics-based mechanism is the detection
strategy. Commonly, detection strategies are based on metrics that consider only
properties of isolated versions of the systems. However, recent studies have
reported that these strategies have been considered counter-productive. Our research is
related to the investigation of the possible benefits of considering information
about the code evolution to detect anomalies. In this context, this work
proposes a set of metrics and detection strategies that consider historic
properties of the code evolution. Furthermore, a measuring and assessing tool to
support the approach was also developed. This tool allows the declarative
specification of different detection strategies through a domain-specific
language. This fact meets the particular needs of developers in setting
strategies and offers researchers a convenient environment for experimenting
with detection strategies. As a further contribution of this research, we present
an evaluation of the strategies in terms of precision and recall in two
systems of different domains. Results of conventional strategies and of
history-sensitive strategies are compared in detections of classical modularity
flaws, such as God Class, Divergent Change and Shotgun Surgery in a total of 16
versions of these systems. Results of this study showed that the use of
information related to the code evolution can provide important contributions to
detect design flaws in code.
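As a hypothetical example of what a history-sensitive strategy looks like (the metrics and thresholds below are ours, not the dissertation's), a God Class detector can combine conventional thresholds on the last version with a condition on growth across versions:

    def god_class(history):
        # history: one metrics dict per version of a class, oldest first.
        last = history[-1]
        conventional = (last["loc"] > 500 and last["methods"] > 30
                        and last["cohesion"] < 0.3)
        # History-sensitive part: the class also grew in most versions,
        # suggesting it keeps absorbing responsibilities.
        growth = sum(1 for a, b in zip(history, history[1:])
                     if b["loc"] > a["loc"])
        return conventional and growth >= 0.75 * (len(history) - 1)

    versions = [{"loc": 420, "methods": 25, "cohesion": 0.35},
                {"loc": 510, "methods": 31, "cohesion": 0.28},
                {"loc": 640, "methods": 38, "cohesion": 0.22}]
    print(god_class(versions))   # True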
[10_MSc_lachtermacher]
Luana LACHTERMACHER.
O uso de tabelas de decisão para a automação da geração e
da execução de casos de teste. [Title
in English: Using decision tables to automate the generation and execution of
test cases].
M.Sc. Diss. Port. Presentation: 25/03/10. 105 p. Advisor: Arndt von Staa.
Abstract: Testing is a very important area in software development.
However, this area needs more effective tools, with a higher level of automation,
which are more comprehensive than the tools available today. A large part of the
techniques that generate test cases use decision tables, explicitly or implicitly,
as an auxiliary tool. This dissertation aims to develop a semi-automatic
process that has as outputs test suites that were generated using decision
tables. These suites must be appropriate for the test automation tools. To
achieve these goals, we implemented: (i) a decision table editor, (ii) an
automatic test case generator, and (iii) an automatic generator of test scripts
for the FEST framework. Afterwards, we evaluated the benefits that these tools
could bring to the testing area, both in the planning (generation of valued test
cases from semantic test cases) and in the execution of test cases. The
evaluation was based on a series of examples involving specific elements of
human interfaces, and also in application to real software.
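The core generation step is mechanical: each rule (column) of a decision table becomes one semantic test case. A toy sketch, with a made-up login table:

    conditions = ["valid user", "valid password"]
    rules = [   # one entry per decision-table rule; None = "don't care"
        {"valid user": True,  "valid password": True,  "action": "login ok"},
        {"valid user": True,  "valid password": False, "action": "password error"},
        {"valid user": False, "valid password": None,  "action": "user error"},
    ]

    def generate_test_cases(conditions, rules):
        # Condition values become the inputs of a test case; the rule's
        # action becomes its expected output.
        for n, rule in enumerate(rules, 1):
            inputs = {c: rule[c] for c in conditions if rule[c] is not None}
            yield {"id": f"TC{n}", "inputs": inputs, "expected": rule["action"]}

    for tc in generate_test_cases(conditions, rules):
        print(tc)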
[10_PhD_machado]
Lucas Euzébio
MACHADO.
Parallel algorithms for multicore game engines. [Title
in Portuguese: Algoritmos paralelos para motores de jogos em multiprocessadores].
Ph.D. Thesis. Eng. Presentation: 19/03/10. 70 p. Advisor: Bruno Feijó.
Abstract: This thesis presents several techniques for parallel processing
in electronic games. The thesis begins by presenting several possible architectures
for a game engine. A new architecture is presented, more flexible and
adequate for the processors of the future, which will have a higher level of
parallelism. Following, a new technique for processing an octree, a classic data
structure for computer graphics, is presented. The last techniques presented are
related to collision detection. New techniques for processing hierarchical grids
and balancing collision detection on a set of objects are presented.
[10_MSc_teixeira]
Lucas Pinto
TEIXEIRA. Local SLAM — localização de câmera e mapeamento local de ambientes
simultâneos. [Title
in English: Local SLAM — simultaneous camera localization and local environment
mapping].
M.Sc. Diss. Port. Presentation: 25/03/10. 66 p. Advisors: Marcelo Gattass and
Alberto Barbosa Raposo.
Abstract: Nowadays, vision systems in portable
computers are becoming an important tool for personal use. Vision systems for
object localization are an active area of research. This dissertation proposes an
algorithm to locate the camera and objects in a regular environment with the use
of a simple webcam and a personal computer. To that end, we use two algorithms
of marker tracking to frequently restart a Visual Simultaneous Localisation and
Mapping algorithm. This dissertation also presents an implementation and a set
of tests that validate the proposed algorithm.
[10_MSc_caldeira]
Luiz Rodolfo Neves CALDEIRA.
Geração semi-automática de massas de testes
funcionais a partir da composição de casos de uso e tabelas de decisão. [Title
in English: Semi-automatic generation of functional test scripts by composing
use cases with decision tables].
M.Sc. Diss. Port. Presentation: 23/08/10. 98 p. Advisor: Arndt von Staa.
Abstract:
This work aims at developing a process and tools for
the semi-automatic generation of functional test scripts for web based systems.
The process and tools start from use cases and decision tables, in order to
produce high quality automated tests as well as to reduce the time spent
generating them. The test specifications are provided by use cases written in
semi-structured Portuguese and obeying a well defined structure. By means of a
tool, decision tables are manually built from the use case descriptions.
Afterwards semantic test cases are automatically generated from these decision
tables. Another tool generates executable test scripts from these test cases.
The generated test scripts must suit the tool used for automated testing. In
this work, the Selenium tool was used for automating test interaction with the
browser. The evaluation of the efficacy of the process and tools was performed
applying them to a real system and comparing the result with traditional
techniques of automated test generation regarding this same system.
[10_MSc_abreu]
Manoel Teixeira de ABREU NETTO.
Um Framework baseado em padrões para a construção
de sistemas multi-agentes auto-organizáveis. [Title
in English: A pattern-based framework to build self-organizing multi-agent
systems].
M.Sc. Diss. Port. Presentation: 29/03/10. 81 p. Advisor: Carlos José Pereira
de Lucena.
Abstract: The approach of self-organizing systems has
increased in relevance and use within complex domains, for it allows the
development of decentralized systems that exhibit a dynamic and adaptable
behavior in facing the challenge of handling disturbances in the environment,
which were previously unknown. The main difficulties in building self-organizing
systems lie in the development of mechanisms of interaction and coordination
between the agents of the environment and the lack of reuse of solutions already
adopted. In this context, this dissertation proposes a framework as a reusable
solution for building decentralized self-organizing systems, based on major
architectural patterns found in the literature, and also provide a means of
extensibility to develop new mechanisms of interaction and coordination. From
the framework, instances for various domains can be created, for example, a
self-organizing and decentralized solution to the automated guided vehicles
problem, as will be presented in this dissertation.
[10_MSc_azevedo]
Marcelo Cohen de AZEVEDO.
Gerador de aplicações para consultas a bases RDF/RDFS. [Title
in English: An application builder for querying RDF/RDFS datasets].
M.Sc. Diss. Port. Presentation: 07/01/10. 134 p. Advisor: Daniel Schwabe.
Abstract: Due to the increasing popularity of the semantic web, more datasets,
containing information about varied domains, have become available for access on
the Internet. This thesis proposes a tool to assist accessing and exploring this
information. This tool allows the generation of applications for querying
databases in RDF and RDFS through programming by example. Users are able to
create use cases through simple operations using the RDFS model. These use cases
can be generalized and shared with other users, who can reuse them. The shared
use cases can be customized and extended collaboratively in the environment in
which they were developed. New operations can also be created and shared, making
the tool increasingly more powerful. Finally, using a set of use cases, it's
possible to generate a web application that abstracts the RDF model where the
data is represented, making it possible for lay users to access this information
without any knowledge of the RDF model.
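At bottom, the generated applications answer triple-pattern queries over an RDF graph; a minimal matcher (pure Python, no RDF library, data invented for illustration) shows the operation being abstracted away from lay users:

    triples = [
        ("ex:Rio", "rdf:type", "ex:City"),
        ("ex:Rio", "ex:locatedIn", "ex:Brazil"),
        ("ex:PUC-Rio", "ex:locatedIn", "ex:Rio"),
    ]

    def match(pattern, data):
        # None acts as a variable; any other term must match exactly.
        s, p, o = pattern
        return [t for t in data
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # "What is located in Rio?" -- a generated application would build
    # this pattern from a use case defined by example.
    print(match((None, "ex:locatedIn", "ex:Rio"), triples))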
[10_MSc_oikawa]
Marcelo OIKAWA.
Conversão de regexes para parsing expression grammars. [Title
in English: Converting regexes to parsing expression grammars].
M.Sc. Diss. Port. Presentation: 25/08/10. 71 p. Advisor: Roberto
Ierusalimschy.
Abstract: Regular expressions are a formalism used to
describe regular languages and form the basis of several pattern-matching
libraries. However, many interesting patterns either are difficult to describe
or cannot be described by pure regular expressions. Because of these
limitations, modern scripting languages have pattern-matching libraries based
on regexes, i.e., extensions of regular expressions mainly composed of a set
of ad hoc constructions that focus on specific problems. Although very useful
in practice, these implementations are complex and distant from the original
formalism of regular expressions. Parsing Expression Grammars (PEGs) are a
formal alternative for recognizing patterns; they are much more expressive
than pure regular expressions and do not need ad hoc constructions. The goal
of this work is to study the conversion of regexes to PEGs. To accomplish this
task, we studied current implementations of regexes and show how to convert
some of their constructions to PEGs. Finally, we present an implementation
that converts regexes to PEGs for the Lua language.
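The core of such a conversion can be pictured as a continuation-passing
transformation, in which a repetition e* with continuation k becomes a fresh
rule A <- e A / k (a minimal sketch in Python; the AST encoding and function
names are ours, not the dissertation's implementation):

    import itertools

    _fresh = itertools.count(1)

    def to_peg(ast, k, rules):
        """Translate a regex AST into a PEG expression, threading a
        continuation k that represents 'what must match afterwards'."""
        op = ast[0]
        if op == 'char':                 # literal: match it, then the continuation
            return f"'{ast[1]}' {k}"
        if op == 'seq':                  # e1 e2: e2 joins e1's continuation
            return to_peg(ast[1], to_peg(ast[2], k, rules), rules)
        if op == 'alt':                  # e1|e2 turns into *ordered* choice
            return f"({to_peg(ast[1], k, rules)} / {to_peg(ast[2], k, rules)})"
        if op == 'star':                 # e*: fresh rule A <- e A / k
            name = f"A{next(_fresh)}"
            rules[name] = f"{to_peg(ast[1], name, rules)} / {k}"
            return name
        raise ValueError(f"unknown node {op!r}")

    rules = {}
    # (a|b)*c anchored at the end of input ('!.' is PEG's end-of-input test):
    rules['S'] = to_peg(('seq', ('star', ('alt', ('char', 'a'), ('char', 'b'))),
                         ('char', 'c')), '!.', rules)
    for name, expr in rules.items():
        print(name, '<-', expr)   # A1 <- ('a' A1 / 'b' A1) / 'c' !.   S <- A1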
[10_MSc_santos]
Marcio David de Magalhães SANTOS. Estendendo a ferramenta SAFE para JBOSS AOP. [Title
in English: Extending the SAFE tool for JBOSS AOP].
M.Sc. Diss. Port. Presentation: 20/08/10. 147 p. Advisor: Arndt von Staa and
Roberta de Souza Coelho.
Abstract: The Aspect-Oriented Paradigm (AOP) is used
in many frameworks and applications in order to improve modularity and
separation of concerns. However, the combination of AOP and exception handling
mechanisms may increase the number of error-prone scenarios: aspects may raise
exceptions that the application was not designed to handle. This dissertation
presents (i) an empirical study showing how AOP affects exception handling in
an application that uses JBoss AOP, and (ii) a tool that supports the study.
The study shows that error-prone scenarios occur mainly because exceptions are
caught by subsumption.
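Catching by subsumption means that a handler declared for a general exception
type silently intercepts a more specific exception it was never designed for.
A minimal illustration (in Python, rather than the Java/JBoss AOP setting of
the study; all names are ours):

    class InfraError(Exception): ...
    class AspectTimeoutError(InfraError): ...   # introduced later by an aspect

    def fetch():
        raise AspectTimeoutError("woven-in timeout")  # raised by aspect code

    try:
        fetch()
    except InfraError:
        # Handler written before the aspect existed: it catches the new
        # exception by subsumption and silently masks the failure.
        print("retrying...")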
[10_PhD_moreno]
Márcio Ferreira MORENO.
Conciliando flexibilidade e eficiência no
desenvolvimento do ambiente declarativo Ginga-NCL. [Title
in English: Bringing together flexibility and efficiency in the development of
the Ginga-NCL declarative environment].
Ph.D. Thesis. Port. Presentation: 16/08/10. p. Advisor: Luiz Fernando
Gomes Soares.
Abstract: Digital TV (DTV) systems are defined by a set of
specifications that establish the technologies involved in the content
encoding, transmission, reception and presentation, including the
specification of applications (non-linear programs), their various related
media objects and metadata. In this scenario, support to applications is
accomplished through an intermediary software layer, or middleware,
positioned, in the receiving environment, between the application code and the
execution infrastructure (hardware platform and operating system). The
middleware design and implementation bring a number of challenging issues.
Among them are: efficient resource management, since resources are usually
scarce in DTV receiver devices; support to dynamic evolution of the middleware
functionalities; support to fault recovery at runtime; mechanisms for resource
location management, allowing the same syntax used in the authoring
environment to be used in the different receiver environments; support to live
editing of non-linear programs (i.e., applications); the definition of an
infrastructure for the asynchronous transport of interactive applications and
control commands; and the life-cycle control of interactive applications,
which may be started, paused and resumed at any point in their lifetime
without losing their evolution history. Most of these issues are addressed in
existing systems, although with important limitations; some are not even
addressed, being only treated with workaround tricks. This work proposes
alternative solutions to the mentioned issues and incorporates the solutions
into the Ginga-NCL declarative middleware specification and its reference
implementation. Ginga-NCL and its declarative language NCL were adopted by the
SBTVD-T in 2007. In early 2009, Ginga-NCL and NCL became part of the ISDB-TB
standard and of the ITU-R BT.1699 Recommendation. Also in early 2009,
Ginga-NCL and NCL became the ITU-T H.761 Recommendation for IPTV services.
[10_MSc_cunha]
Marcio Luiz Coelho CUNHA.
Redes sociais dirigidas ao contexto das coisas. [Title
in English: Context driven things social networks].
M.Sc. Diss. Port. Presentation: 27/08/10. 72 p. Advisor: Hugo Fuks.
Abstract:
Every day, more and more Brazilians have a next-generation mobile phone with
an Internet connection. These new devices are able to read different types of
labels used to store, retrieve and manage information, and they are with us
everywhere to support our daily tasks. These small computers are aware of
their surroundings and well suited to communication and collaboration with the
real world. Due to their popularity, availability and the critical mass of
users reached, new services are developed based on the concept of ubiquitous
computing, in which computers and humans are unified around the concept of
environment. These systems deal with issues of pervasive interaction, context,
recognition of environments, and adaptation according to preferences. This
dissertation presents a description of the development and usability testing
of a social network based on the concepts of ubiquitous computing and the
Internet of Things. This social network, dedicated to the theme of
enogastronomy, is accessible by mobile devices and uses two-dimensional codes
pasted on wine bottles: using the software and the phone's camera, it brings
up information in accordance with the context of the object, the place and the
user's preferences.
[10_MSc_gomes]
Pablo Frias de
Oliveira Bioni GOMES. Extração de primeiro plano em imagem HD com fundos
variados. [Title
in English: Foreground extraction from HD images with any type of background].
M.Sc. Diss. Port. Presentation: 16/03/10. 55 p. Advisor: Bruno Feijó.
Abstract:
The film and broadcast industries have massively used the chroma key
technique, also known as blue screen matting. This technique deeply
transformed the entertainment industry, allowing impossible scenes to become
reality. The evolution of this technique allowed complex productions to have
better control and lower costs. However, this technique needs a sequence of
preparation stages, which require high budgets and precise planning.
Furthermore, continuity errors usually cause serious post-production problems.
Currently, the entertainment industry is searching for other matting
techniques that work on any kind of background. The use of these techniques is
still restricted to academic works and still-image manipulation software. The
present work analyzes current chroma key processes and proposes a matting
technique that works over any type of background in high-definition images.
Two local methods of calculating alpha values are presented: one based on
clusters and one based on electric potential.
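For context, the standard compositing model behind any matting technique
(textbook background, not a formulation taken from this dissertation) assumes
each observed pixel color $C$ is a blend

\[
C \;=\; \alpha F + (1 - \alpha) B,
\]

where $F$ is the foreground color, $B$ the background color and
$\alpha \in [0,1]$ the opacity. Chroma keying makes the problem tractable by
fixing $B$ near a known backing color; with arbitrary backgrounds the equation
is underconstrained at every pixel, which is why local estimators such as the
cluster- and potential-based methods above are needed.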
[10_MSc_moraes]
Pedro
Luchini de MORAES.
Motion synthesis for non-humanoid virtual characters. [Title
in Portuguese: Síntese de movimentos para personagens virtuais não-humanóides].
M.Sc. Diss. Port. Presentation: 05/03/10. 49 p. Advisor: Bruno Feijó.
Abstract:
We present a technique for automatically generating animations for virtual
characters. The technique is inspired by several biological principles,
especially evolution and natural selection. The virtual characters themselves
are modeled as animal-like creatures, with a musculoskeletal system that is
capable of moving their bodies through simple physics principles, such as
forces and torques. Because our technique does not make any assumptions about
the structure of the character, it is capable of generating animations for any
kind of virtual creature.
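The kind of evolutionary loop the abstract alludes to can be sketched as
follows (an illustrative sketch: the genome encoding, selection scheme and
mutation rate are our assumptions, and a real fitness function would score the
simulated creature's locomotion):

    import random

    def evolve(fitness, genome_len=16, pop=30, gens=50, sigma=0.1):
        """Evolve real-valued controller genomes (e.g., muscle activations)."""
        population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                      for _ in range(pop)]
        for _ in range(gens):
            scored = sorted(population, key=fitness, reverse=True)
            parents = scored[:pop // 4]                  # truncation selection
            population = parents + [
                [g + random.gauss(0, sigma) for g in random.choice(parents)]
                for _ in range(pop - len(parents))       # Gaussian mutation
            ]
        return max(population, key=fitness)

    # Toy fitness; a physics simulation measuring distance covered goes here.
    best = evolve(lambda genome: -sum((g - 0.5) ** 2 for g in genome))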
[10_MSc_marinho]
Rafael Savignon MARINHO.
Ginga-NCL como plugin para navegadores Web. [Title
in English: Ginga-NCL as a plugin for Web browsers].
M.Sc. Diss. Port. Presentation: 19/10/10. p. Advisor: Luiz Fernando Gomes
Soares.
Abstract:
Over the past few years, Web users have shown a significant change in their
behavior, becoming multimedia content producers besides consumers. On the
other hand, Brazilian Digital TV content production, especially the
interactive applications written in NCL (Nested Context Language), is still
carried out by professionals working at TV broadcasters and content production
companies. Considering the new profile of Web users and the fact that NCL is a
hypermedia programming language whose scope is not restricted to Digital TV
applications, the opportunity arises to popularize the use of NCL as a new way
to specify multimedia content also on the Web. Moreover, since NCL is
recommended by ITU-T as a reference for IPTV services, it is reasonable to
build a new platform to deploy such services on the Web. Motivated by this new
scenario, this work proposes the adaptation of the Ginga-NCL declarative
middleware, the software layer in charge of executing NCL applications, to the
Web environment. The proposed adaptation aims to offer content presentation
control, live editing support and synchronization among media objects from
both domains (Web and Interactive TV). In short, this work discusses how a Web
page can benefit from the internal player API and other features offered by
the Ginga middleware. In addition, a new supporting platform for the
middleware is also proposed, in order to facilitate the adaptation process.
[10_MSc_rgomes]
Raphael do Vale Amaral GOMES.
MatchMaking - uma ferramenta para alinhamento de esquemas OWL. [Title
in English: MatchMaking - a tool to match OWL schemas].
M.Sc. Diss. Port. Presentation: 05/04/10. 73 p. Advisor: Marco Antonio
Casanova.
Abstract: A database conceptual schema, or simply a schema, is
a high-level description of how database concepts are organized. A schema
matching from a source schema S into a target schema T defines concepts in T
in terms of the concepts in S. This work describes a software tool that helps
implement instance-based schema matching techniques for OWL dialects that
depend on the definition of similarity functions to evaluate the semantic
proximity of elements from two different schemas. The tool is designed to
accommodate different similarity functions and distinct matching algorithms,
thereby facilitating experimentation with alternative matching configurations.
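As an illustration of an instance-based similarity function the tool could
accommodate (our sketch, not the tool's code): the Jaccard overlap between the
sets of instance values observed for two properties, plugged into a greedy
matcher:

    def jaccard(instances_s, instances_t):
        """Overlap of the value sets observed for a property in S and in T
        (one possible similarity function among many)."""
        s, t = set(instances_s), set(instances_t)
        return len(s & t) / len(s | t) if s | t else 0.0

    def match(schema_s, schema_t, sim=jaccard, threshold=0.5):
        """Greedy matching: pair each S-property with its most similar
        T-property, keeping only pairs above the threshold."""
        return {
            ps: max(schema_t, key=lambda pt: sim(schema_s[ps], schema_t[pt]))
            for ps in schema_s
            if max(sim(schema_s[ps], schema_t[pt]) for pt in schema_t) >= threshold
        }

    s = {'nome': ['Ana', 'Bob'], 'cpf': ['1', '2']}
    t = {'name': ['Ana', 'Bob', 'Carol'], 'ssn': ['9']}
    print(match(s, t))   # {'nome': 'name'} with the default threshold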
[10_MSc_azevedo]
Roberto Gerson de Albuquerque AZEVEDO.
Suporte ao controle e à apresentação de
objetos de mídia tridimensionais em NCL. [Title
in English: Supporting three-dimensional media object control and presentation
in NCL].
M.Sc. Diss. Port. Presentation: 27/08/10. 113 p. Advisor: Luiz Fernando Gomes
Soares.
Abstract: The world we live in is physically formed by
three-dimensional space, and it is natural that human beings want to represent
it as faithfully as possible. On the Web, for example, there are many efforts
to support the creation of interactive 3D applications, most of them based on
scene graphs and route graphs. Scene graphs have become a standard for
modeling spatial 3D applications through a hierarchical and declarative
approach. To represent the scene graph's behavior, route graphs or imperative
languages are most commonly used. As regards Interactive Digital Television
(iDTV), on the other hand, there is still a lot of work to be done. Nested
Context Language (NCL) is the standard declarative language for Terrestrial
Digital Television Systems (ISDB-T) and IPTV (ITU-T), which allows hypermedia
document authoring through a simple and expressive approach. Even though NCL
does not restrict the types of media objects it handles, in its current
version (3.0) NCL treats only two-dimensional objects, relating them
temporally and spatially. Given the importance of NCL in the iDTV scenario,
this research discusses how it can also control three-dimensional objects. As
a special case, this work discusses how NCL can control the behavior of
composite objects represented by scene graphs, examining its advantages and
disadvantages. In order to test what was proposed, an X3D player (X3D is an
ISO-standard language based on scene graphs) was incorporated into the
Ginga-NCL reference implementation, which is responsible for playing NCL
applications. Additionally, a new set of NCL events is proposed, in order to
reflect three-dimensional events, along with NCL regions based on
three-dimensional geometry, so that the presentation of two-dimensional media
objects over 3D objects' surfaces becomes possible.
[10_MSc_silva]
Rodrigo Marques Almeida da SILVA.
Simulação e visualização de oceano em tempo
real utilizando a GPU. [Title
in English: Real-time ocean simulation and visualization using GPU].
M.Sc. Diss. Port. Presentation: 17/03/10. 156 p. Advisor: Bruno Feijó.
Abstract:
The synthesis of realistic natural scenes in real time is one of the most
important research areas for applications in games and simulators. In the
beginning of real-time computer graphics, most applications treated the water
surface as a textured plane. This approach produces a rendering of very low
realism and does not reproduce the correct hydrodynamic behavior of the water.
Many research groups have therefore developed techniques for realistic water
rendering, most of them for off-line processing and a few for real-time use.
However, current improvements in hardware performance allow the use of
traditional off-line techniques for real-time purposes, but no previous work
describes these techniques and makes a comparative analysis of them. Without
such an analysis, it is very difficult to choose the best technique for a
specific hardware platform or to decide whether a particular technique
provides the simulation control that a certain application needs. In this
context, the present work analyzes the most important techniques for real-time
ocean water simulation and visualization using the graphics processing unit as
the main processor. Moreover, it makes a comparative performance analysis of
each technique and discusses their pros and cons. Furthermore, some classic
off-line methods are adapted for GPU use.
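One of the classic techniques such a survey covers is the sum-of-sines
heightfield, which real-time implementations evaluate per vertex on the GPU (a
minimal CPU sketch with illustrative wave parameters, not code from the
dissertation):

    import numpy as np

    def heightfield(t, n=256,
                    waves=((0.8, 0.10, (1.0, 0.3)),   # (amplitude, frequency, direction)
                           (0.3, 0.45, (0.2, 1.0)))):
        """Ocean surface heights at time t as a sum of directional sinusoids."""
        x, y = np.meshgrid(np.linspace(0, 100, n), np.linspace(0, 100, n))
        h = np.zeros_like(x)
        for amp, freq, (dx, dy) in waves:
            # phase advances with time so the waves travel across the grid
            h += amp * np.sin(freq * (dx * x + dy * y) + 2.0 * freq * t)
        return h

    frame = heightfield(t=0.5)   # one animation frame of surface heights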
[10_PhD_rocha]
Roger ROCHA.
Petroleum supply planning: models, reformulations and algorithms. [Title
in Portuguese: Planejamento do suprimento de petróleo: modelos, reformulações e
algoritmos].
Ph.D. Thesis. Eng. Presentation: 24/05/10. 124 p. Advisor: Marcus Vinicius
Soledade Poggi de Aragão.
Abstract: The petroleum supply planning
activity is an important link in the integration of the petroleum supply chain
at Petrobras, as it is responsible for refining the strategic supply planning
information to be used at the operational level. In this thesis we set the
ground for understanding this important problem and propose a mathematical
model to solve it. Although solvers have evolved enormously in the last
decade, for this particular application we cannot obtain solutions of
satisfactory quality in reasonable computational time with the initially
proposed model alone. This directed the research of this thesis toward
investigating, in detail, the structure of the problem in order to find more
suitable reformulations and algorithms to tackle it. Our primary goal is to
solve the petroleum supply planning problem at Petrobras efficiently.
Nevertheless, as a by-product of this endeavor, we propose a novel
decomposition algorithm and reformulations based on cascading knapsack
structures that turn out to be applicable to a wide range of problems.
Concerning the main objective, we obtain good results for all instances
tested. We show that the novel decomposition algorithm is the best-suited
method for solving the petroleum supply planning problem when more than two
tankers offload each platform. In the case of one or two tankers per platform,
the hull relaxation formulation based on the cascading knapsack structure,
introduced after an inventory reformulation at the platforms, is the best
option. For the real application, these solution alternatives allow
implementing a general algorithm that automatically switches to the best
solution option depending on the structure of the problem. For the mixed
situation, i.e., the number of tankers varying from one to four, one can run
more than one approach in parallel and take the fastest or the best result
obtained. This model is being tested at Petrobras and is proving to be an
effective tool to help integrate its petroleum supply chain, as well as to
perform what-if analyses in search of alternative solutions never considered
before.
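To make "cascading knapsack structures" concrete (our paraphrase of the
generic structure, not the thesis's exact model): let $s_t$ be a platform's
inventory in period $t$, $p_t$ its production, $o_t$ the volume offloaded by
tankers, and $C$ the tank capacity. The inventory balance

\[
s_t \;=\; s_{t-1} + p_t - o_t, \qquad 0 \le s_t \le C,
\]

after eliminating the $s_t$ variables, yields one knapsack-like constraint per
horizon prefix,

\[
s_0 + \sum_{\tau=1}^{t} p_\tau - C
\;\le\; \sum_{\tau=1}^{t} o_\tau
\;\le\; s_0 + \sum_{\tau=1}^{t} p_\tau,
\qquad t = 1, \dots, T,
\]

and it is this nested ("cascading") family of constraints that the
reformulations exploit.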
[10_PhD_costa]
Romualdo Monteiro de Resende COSTA. Controle do sincronismo temporal de
aplicações hipermídia. [Title
in English: Temporal synchronism control of hypermedia applications].
Ph.D. Thesis. Port. Presentation: 30/08/10. 160 p. Advisor: Luiz Fernando Gomes
Soares.
Abstract:
Synchronization control is one of the most important requirements in the
presentation of hypermedia applications. To assure high-quality presentations,
it is necessary to know when events produced during an application's execution
occur in time. By means of this information, it is possible to predict
undesirable delays and to take, in advance, actions that avoid the need for
presentation adjustments. This thesis discusses how event occurrences can be
predicted in the course of an application presentation. To support this goal,
a temporal graph built from the application specification is proposed. This
graph, called the Hypermedia Temporal Graph (HTG), can be used for the
temporal control of hypermedia applications, from the media transport system
up to their presentation on the client side. Besides quality-of-service
control, this thesis elucidates other advantages that come from such
synchronization management. Among them are allowing presentations to be
started at any moment of their life cycle, and allowing presentations to be
moved backward and forward to any desired moment in time. Another advantage,
also dealt with in this thesis, is the support for distributing parts of an
application to different devices in charge of their presentation, without any
hazard to the application's overall temporal synchronism. Finally, this thesis
also proposes how edits can be made to the HTG, and therefore how application
control can be modified at runtime.
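The prediction such a graph supports can be pictured as propagating occurrence
times through timed event dependencies (an illustrative sketch for acyclic
graphs, not the HTG algorithm itself; event names are ours):

    from collections import defaultdict

    def predict_times(edges, start='begin'):
        """Edges are (event_a, event_b, delay): b occurs `delay` seconds
        after a. Propagates predicted occurrence times from the start event."""
        graph = defaultdict(list)
        for a, b, d in edges:
            graph[a].append((b, d))
        times = {start: 0.0}
        stack = [start]
        while stack:
            a = stack.pop()
            for b, d in graph[a]:
                t = times[a] + d
                if times.get(b, -1.0) < t:   # keep the latest (binding) constraint
                    times[b] = t
                    stack.append(b)
        return times

    print(predict_times([('begin', 'video.start', 0),
                         ('video.start', 'subtitle.start', 5)]))
    # {'begin': 0.0, 'video.start': 0.0, 'subtitle.start': 5.0}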
[10_PhD_medeiros]
Sérgio Queiroz de MEDEIROS.
Correspondência entre PEGs e classes de gramáticas
livres de contexto. [Title
in English: Correspondence between PEGs and classes of context-free grammars].
Ph.D. Thesis. Port. Presentation: 30/08/10. 86 p. Advisor: Roberto
Ierusalimschy.
Abstract:
Parsing Expression Grammars (PEGs) are a formalism for describing languages
whose distinguishing feature is the use of an ordered choice operator. The
class of languages described by PEGs properly contains all
deterministic context-free languages. In this thesis we discuss the
correspondence between PEGs and two other formalisms used to describe languages:
regular expressions and Context-Free Grammars (CFGs). We present a new
formalization of regular expressions that uses natural semantics and we show a
transformation to convert a regular expression into a PEG that describes the
same language; this transformation can be easily adapted to accommodate several
extensions used by regular expression libraries (e.g., lazy repetition and
independent subpatterns). We also present a new formalization of CFGs that uses
natural semantics and we show the correspondence between right linear CFGs and
equivalent PEGs. Moreover, we show that LL(1) grammars with a minor restriction
define the same language when interpreted as a CFG and when interpreted as a PEG.
Finally, we show how to transform strong-LL(k) CFGs into PEGs that are
equivalent.
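A textbook instance of the correspondence (our illustration, consistent with
the results the abstract states): the right-linear CFG

    A -> a A | b

and the PEG

    A <- 'a' A / 'b'

define the same language, {a^n b : n >= 0}. Because the two alternatives begin
with distinct terminals, the grammar is LL(1), and replacing the unordered
choice '|' by the ordered choice '/' cannot change the language recognized,
exactly as the LL(1) result above predicts.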
[10_PhD_lauschner]
Tanara LAUSCHNER.
Modelagem de restrições de esquemas medianos. [Title in
English: Modeling the mediated schema constraints].
Ph.D. Thesis. Port. Presentation: 09/08/10. 80 p. Advisor: Marco Antonio
Casanova.
Abstract: Data integration refers to the problem of combining data stored in
different sources, providing users with a unified view of the data. Queries are
then expressed in terms of a global or mediated schema, which should include
integrity constraints that contribute to a correct understanding of what the
semantics of the data sources have in common. This thesis addresses the
problem of modeling the constraints of a mediated schema from the imported
schemas' constraints and mappings. It argues that the constraints should be
modeled as the greatest lower bound of the constraints of the export schemas,
after appropriate translation to a common vocabulary. This assures that users of
the mediated schema will correctly interpret query results. For a rich family of
constraints, it shows how to efficiently compute the greatest lower bound of
sets of constraints.
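A small illustration of the greatest lower bound idea (ours, not an example
from the thesis): if one export schema guarantees that every person has at
least two phone numbers, $\mathrm{minCard}(\mathrm{phone}) \ge 2$, while
another guarantees only $\mathrm{minCard}(\mathrm{phone}) \ge 1$, then the
strongest constraint implied by both sources, and hence the one the mediated
schema may safely publish, is $\mathrm{minCard}(\mathrm{phone}) \ge 1$: the
greatest lower bound of the two constraints under logical implication.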
[10_MSc_araujo]
Thiago Pinheiro de ARAÚJO.
SDiff: uma ferramenta para comparação de documentos com base nas suas estruturas
sintáticas. [Title
in English: SDiff: a comparison tool based on syntactical document structure].
M.Sc. Diss. Port. Presentation: 08/03/10. 95 p. Advisor: Arndt von Staa.
Abstract: Associated with each version control system there is a comparison
tool for extracting the differences between two versions of a document. These
tools tend to compare documents based on their textual information, in which
the indivisible element is the line or the word. But the versioned content is
usually highly structured (for example, programs in programming languages),
and this mechanism can disrespect syntactical boundaries and other properties
of the document, making it difficult to interpret what really changed. In this
work we created a tool that identifies the differences between two versions of
a document using a comparison mechanism based on their syntactic structure.
Thus, it is possible to identify more precisely the differences that are
relevant to the reader, reducing the effort to understand the semantics of the
changes. The tool can support different types of documents through components
that interpret the desired syntax. The example syntax component implemented in
this work deals with the syntax of the C++ programming language.
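The idea can be illustrated with a node-level diff over Python ASTs (an
illustrative analogue of syntactic comparison; SDiff itself targets C++
through its syntax components):

    import ast, itertools

    def tree_diff(a, b, path='root'):
        """Report where two ASTs diverge, instead of which text lines differ."""
        if type(a) is not type(b):
            return [f'{path}: {type(a).__name__} -> {type(b).__name__}']
        if isinstance(a, ast.Constant) and a.value != b.value:
            return [f'{path}: {a.value!r} -> {b.value!r}']
        diffs = []
        pairs = itertools.zip_longest(ast.iter_child_nodes(a),
                                      ast.iter_child_nodes(b))
        for ca, cb in pairs:
            if ca is None or cb is None:
                diffs.append(f'{path}: child inserted or removed')
            else:
                diffs += tree_diff(ca, cb, f'{path}/{type(ca).__name__}')
        return diffs

    old = ast.parse("x = 1\nprint(x)")
    new = ast.parse("x = 2\nprint(x)")
    print(tree_diff(old, new))   # ['root/Assign/Constant: 1 -> 2']

A line-based diff would flag the whole first line; the syntactic diff pins the
change to the single literal that actually changed.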
[10_MSc_silva]
Thuener Armando da SILVA.
Estudo experimental de técnicas para otimização de
carteiras. [Title
in English: Experimental study of techniques for portfolio optimization].
M.Sc. Diss. Port. Presentation: 04/06/10. 126 p. Advisor: Eduardo Sany Laber.
Abstract:
In 1959, Markowitz structured the foundations of modern portfolio theory
through the analysis of the risk and return of assets. Five decades later, his
theory is still widely used as a basis for building portfolios. In this thesis
we investigate variations of the Markowitz model for portfolio selection from
both a theoretical and a practical point of view. We analyze the impact of
different methods for predicting risk and return, transaction costs, target
risk, and the frequency of portfolio revision. In order to test and analyze
the strategies studied, we implemented a robust and versatile simulator and
created a database with daily data of 41 assets from the Brazilian stock
exchange, the CDI and the IBOVESPA.
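For reference, the classical mean-variance program that such variations build
on (the standard textbook formulation, not a model specific to this work):

\[
\min_{w}\; w^{\top} \Sigma w
\quad \text{s.t.} \quad
\mu^{\top} w \ge r^{*}, \qquad
\mathbf{1}^{\top} w = 1, \qquad
w \ge 0,
\]

where $w$ is the vector of portfolio weights, $\Sigma$ the estimated
covariance matrix of returns, $\mu$ the expected returns and $r^{*}$ the
target return. The prediction methods studied correspond to different
estimators of $\Sigma$ and $\mu$, while transaction costs and revision
frequency enter as additional terms and constraints.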
[10_MSc_sangiorgi]
Ugo Braga
SANGIORGI.
Apoiando o projeto e avaliação da interação e da interface: um estudo
e uma ferramenta. [Title
in English: Supporting interaction and interface design: a case study and a tool].
M.Sc. Diss. Port. Presentation: 19/03/10. 81 p. Advisor: Simone Diniz
Junqueira Barbosa.
Abstract: Nowadays, with the growing popularity of the
Internet and mobile devices, interactive systems have increasingly gained
ground among different people with different cultural backgrounds. However,
the design of the interaction that takes place between users and those systems
is not properly supported by tools or notations, turning the interactive
experience into a mere consequence of the internal functions of the systems,
in which errors are treated as exceptions and must therefore be avoided
instead of properly supported. This work presents a study on how system design
might be supported by a model that combines interaction and interface design,
and a tool to support this modeling and to generate prototypes, fostering the
designer's reflection on the interactive solution being created. The MoLIC
language is used to represent the system's behavior, and sketches are used to
represent the interface in a combined model. The prototypes are generated from
a sequence of sketches, guided by the interaction model. An exploratory study
was conducted in order to gather feedback about the proposed approach and to
investigate the feasibility of the integrated design of interaction and
interface.
[10_PhD_reis]
Valéria Quadros dos REIS.
Um estudo sobre reserva de recursos computacionais no
nível do usuário. [Title in English: A study on computational resource
reservation at user level].
Ph.D. Thesis. Port. Presentation: 17/05/10. 100 p.
Advisor: Renato Fontoura de Gusmão Cerqueira.
Abstract: The way computing
is done today is changing as a result of the ever-increasing processing,
storage and communication capacities of modern computer hardware.
Resource-sharing scenarios, in which a physical server is shared by different
applications, are becoming much more common. These scenarios require special
attention to guarantee the performance isolation of each application, so that
each behaves exactly as if it were executing locally. In this setting, the
present work investigates techniques for providing resource reservations and
thus guaranteeing quality of service (QoS) and performance isolation for
applications. Considering environments in which the use of operating system
extensions or of virtualization is unreasonable or inappropriate, this work
investigates the viability and effectiveness of reservations made at user
level, that is, reservations guaranteed with no operating system kernel
instrumentation. For this purpose, we have implemented a tool that limits and
ensures the proper usage of processing and disk bandwidth resources
exclusively through Linux operating system primitives and that, among other
functions, permits easy scheduling policy extensions. This feature enables
flexibility in how resources are shared among distinct processes. Through an
analysis of the tool's usage, we have identified the advantages and
limitations of the techniques employed. As a case study, aiming to achieve
specific performance goals, we established reservation parameters for a
three-tier application. We were able to verify that, even for complex
applications, simple methodologies such as linear regression are capable of
predicting resource usage with a low margin of error.