Theses and Dissertations
2013
ABSTRACTS
Departamento de Informática
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
This file contains the list of the M.Sc. dissertations and Ph.D. theses
presented to the Departamento de Informática, Pontifícia Universidade Católica
do Rio de Janeiro - PUC-Rio, Brazil, in 2013. They are all available in print
format and, according to the authors' preference, some of them are freely
available for download, while others are freely available for download to the
PUC-Rio community exclusively (*).
For any requests, questions, or suggestions, please contact:
Rosane Castilho
bib-di@inf.puc-rio.br
Last update: 31/DECEMBER/2013
** Electronic version not available yet.
[13_MSc_liborio]
Airton José Araujo LIBÓRIO.
Suporte à evolução
arquitetural de sistemas distribuídos baseados em componentes
de software. [Title in English: Support for architectural evolution in
component-based distributed systems].
M.Sc. Diss. Port. Presentation: 16/05/2013. 95 p. Advisor: Renato Fontoura de
Gusmão Cerqueira.
Abstract: The nature of some software systems determines that they run without
interruption. Furthermore, many software systems are constantly subject to
change for reasons that include, but are not limited to, infrastructure changes,
bug fixes, addition of functionalities, and changes in the domain logic. Dynamic
software evolution consists in changing applications during execution without
stopping them, keeping them available even while these modifications are
applied. Component-based distributed systems allow decomposing software into
clearly separated entities. In such cases, evolution can be summarized as the
removal, addition and modification of such entities, and if these activities can
be performed while the application is executing, dynamic adaptation is achieved.
In this work, we investigated an approach that allows manipulating distributed
software architectures developed over the SCS middleware, in order to minimize
system disruption while certain adaptations are deployed. The mechanism was
tested in an already consolidated distributed system, the CAS, an extensible
recording infrastructure that supports automatic capture and access of
distributed media.
[13_MSc_silva]
Alexandre Leite SILVA.
Reúso de estratégias sensíveis a domínio para detecção de anomalias de código:
um estudo de múltiplos casos. [Title in English: Reuse of domain-sensitive
strategies for detecting code anomalies: a multi-case study]. M.Sc. Diss. Port.
Presentation: 08/08/13. 84 p. Advisors:
Alessandro Fabrício Garcia and Elder José Reioli Cirilo.
Abstract: To prevent quality decay, detection strategies are reused to identify
symptoms of maintainability problems in the entire program. A detection strategy
is a heuristic composed of the following elements: software metrics, thresholds,
and logical operators combining them. The adoption of detection strategies
largely depends on their reuse across the portfolio of the organization's
software projects. If developers need to define or tailor those strategy
elements for each project, their use becomes time-consuming and neglected.
Nevertheless, there is no evidence about efficient reuse of detection strategies
across multiple software projects. Therefore, we conducted an industrial
multi-project study to evaluate the reusability of detection strategies in a
critical domain. We assessed the degree of accurate reuse of previously proposed
detection strategies based on the judgement of domain specialists. The study
revealed that even though the reuse of strategies in a specific domain should be
encouraged, their accuracy and reuse were significantly improved when the
metrics, thresholds and logical operators were tailored to each recurring
concern of the domain.
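To make the notion of a detection strategy concrete, here is a minimal sketch in
the style of a God Class strategy (our illustration in Python; the metrics,
thresholds and class layout are assumptions, not the strategies evaluated in the
dissertation):
    # Illustrative only: a detection strategy combines metrics, thresholds
    # and logical operators; the thresholds below are hypothetical.
    class Metrics:
        def __init__(self, wmc, atfd, tcc):
            self.wmc = wmc    # weighted methods per class
            self.atfd = atfd  # access to foreign data
            self.tcc = tcc    # tight class cohesion
    def god_class(m, wmc_high=47, atfd_few=5, tcc_low=1/3):
        # "complex AND uses foreign data AND not cohesive"
        return m.wmc >= wmc_high and m.atfd > atfd_few and m.tcc < tcc_low
    print(god_class(Metrics(wmc=60, atfd=8, tcc=0.1)))  # True: anomaly candidate
Tailoring a strategy to a domain, as the study investigates, amounts to
adjusting these thresholds and operators for each recurring concern.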
[13_MSc_porto]
Alexandre Valdetaro PORTO.
A non-intrusive solution for distributed visualization and collaboration in a
visualizer. [Title in Portuguese:
Uma solução não intrusiva para visualização distribuída e colaboração em um
visualizador]. M.Sc. Diss. Eng. Presentation: 26/03/13. 55 p. Advisor:
Alberto Barbosa Raposo.
Abstract: In this work, we present the design and implementation of distributed
visualization and collaboration for an immersive 3D visualizer. We start by
presenting, at a high abstraction level, our design of a generic visualizer. The
design follows an MVC approach, isolating all the business objects in the lowest
level of the application, making it modular and extensible, thus providing
easier prototyping of functionality and the isolation of complex business logic
algorithms. This design arose from the real needs of a visualizer with a
monolithic implementation, whose maintainability and improvement were impaired
by the high complexity caused by the coupling between the business logic and the
diverse visualization and distribution algorithms. Our hope is that our design
can serve as an inspiration for other visualizers that wish to reduce the
complexity and cost of developing new business functionality. On top of this
design, we then present the detailed design and implementation of a module that
provides distributed visualization and collaboration to the visualizer. This
module is non-intrusive because it requires no changes to the application
architecture; the application can become distributed just by including the
module. The module serves as a proof of concept for our design, as it solves a
classic problem of distribution and synchronism in a visualizer in a way that is
transparent to the business logic. We also implemented an example visualizer
with our design and the proposed module, where we verified both the synchronism
of the distributed visualization and the consistency of the collaboration among
multiple nodes, and we evaluated the performance impact caused by the
distributed visualization.
[13_MSc_saettler]
Aline Medeiros SAETTLER. On the simultaneous minimization of worst testing cost
and expected testing cost with decision trees. [Title in Portuguese: Minimização
simultânea do pior custo e do custo médio em árvores de decisão]. M.Sc. Diss.
Eng. Presentation: 30/07/13. 51 p. Advisor: Eduardo Sany Laber.
Abstract: The problem of minimizing the cost of evaluating a discrete function
by sequentially reading its variables arises in several applications, among them
automatic diagnosis design and active learning. In this problem, each variable
of the function is associated with a cost that we have to pay in order to check
its value. In addition, there may exist a probability distribution associated
with the points where the function is defined. Most of the work in the area has
focused either on the minimization of the maximum cost or on the minimization of
the expected cost spent to evaluate the function. In this dissertation, we show
how to obtain an O(log n) approximation with respect to worst-case minimization
(the best possible approximation under the assumption that P ≠ NP). We also show
a polynomial-time procedure for evaluating a function that simultaneously
optimizes both the worst and the expected costs.
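For illustration, a tiny Python sketch of the setting just described (our
example, not the dissertation's algorithm): a decision tree evaluates
f(x1, x2, x3) = x1 AND (x2 OR x3) by sequentially reading variables, and we
measure both the cost paid on one point and the worst-case cost of the tree:
    costs = {'x1': 3, 'x2': 1, 'x3': 2}  # hypothetical reading costs
    # A tree is (variable, subtree_if_0, subtree_if_1); leaves are 0/1 values.
    tree = ('x1', 0, ('x2', ('x3', 0, 1), 1))
    def evaluate(node, point):
        """Return (value, cost paid) when evaluating `point` with the tree."""
        if not isinstance(node, tuple):
            return node, 0
        var, if0, if1 = node
        value, rest = evaluate(if1 if point[var] else if0, point)
        return value, costs[var] + rest
    def worst_cost(node):
        if not isinstance(node, tuple):
            return 0
        var, if0, if1 = node
        return costs[var] + max(worst_cost(if0), worst_cost(if1))
    print(evaluate(tree, {'x1': 1, 'x2': 0, 'x3': 1}))  # (1, 6): reads x1, x2, x3
    print(worst_cost(tree))                             # 6
An algorithm for the problem must choose which variable to read next so that
such costs are minimized.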
[13_PhD_almeida]
Ana Carolina Brito de ALMEIDA. Framework para apoiar a sintonia fina de banco de
dados. [Title in English: Framework to support database tuning]. Ph.D. Thesis.
Port. Presentation: 13/09/13. 176 p. Advisor: Sérgio Lifschitz.
Abstract: There is a strong demand for the automation of Database Management
System (DBMS) tasks, such as those related to self-management and self-tuning
activities. However, whenever automatic decisions are made, there is also a lack
of clarity about the decisions and actions considered. This thesis proposes a
framework to support the DBA (and possibly other database users) in choices
concerning tuning activities. This research work includes the proposal of an
ontology for (autonomous or not) database tuning that enables a formal approach
for decisions and inferences. The goals are to offer transparency and confidence
on the available tuning alternatives with respect to the possible DBMS scenarios
through a concrete justification about the decisions that are made. Moreover,
new tuning practices may be obtained automatically as soon as new rules,
concepts and practices are known. Finally, our approach enables an actual
combination, at a high level of abstraction, of distinct database tuning
strategies.
[13_MSc_bueno]
André Luis Cavalcanti BUENO.
Resolução de sistemas de equações lineares de grande porte em clusters
multi-GPU utilizando o método do gradiente conjugado em OpenCL. [Title in
English: Solving large systems of linear equations on multi-GPU clusters using
the conjugate gradient method in OpenCL]. M.Sc. Diss. Port. Presentation:
25/03/13. 64 p. Advisor: Noemi de La Rocque Rodriguez.
Abstract: The process of modeling problems in the engineering fields tends to
produce substantially large systems of sparse linear equations. Extensive
research has been done to devise methods to solve these systems. This thesis
explores the computational potential of multiple GPUs, through the use of the
OpenCL technology, aiming to tackle the solution of large systems of sparse
linear equations. In the proposed methodology, the conjugate gradient method is
subdivided into kernels, which are delegated to multiple GPUs. In order to
achieve an efficient method, it was necessary to understand how the GPU
architecture interacts with OpenCL.
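As a reference for the decomposition mentioned above, here is the conjugate
gradient loop in plain NumPy (our sketch; it does not reproduce the
dissertation's OpenCL kernels), annotated with the operations that a multi-GPU
implementation would typically map to kernels:
    import numpy as np
    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x          # kernel: SpMV + vector subtraction
        p = r.copy()
        rs = r @ r             # kernel: dot product (a reduction)
        for _ in range(max_iter):
            Ap = A @ p         # kernel: SpMV, the dominant cost, split across GPUs
            alpha = rs / (p @ Ap)
            x += alpha * p     # kernel: axpy
            r -= alpha * Ap    # kernel: axpy
            rs_new = r @ r
            if rs_new < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x
    A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))         # approx. [0.0909, 0.6364]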
[13_MSc_franceschin]
Bernardo Bianchi FRANCESCHIN. Visualização de seções de corte arbitrárias de
malhas não estruturadas. [Title in English: Visualization of arbitrary cross
sections of unstructured meshes]. M.Sc. Diss. Port. Presentation: 09/04/2013.
53 p. Advisor: Waldemar Celes Filho.
Abstract:
For the visualization of scalar fields in volume data, the use of cross sections
is an effective technique to inspect the field variation inside the domain. The
technique consists in mapping, on the cross section surfaces, a colormap that
represents the scalar field on the surface-volume intersection. In this work, we
propose an efficient method for mapping scalar fields of unstructured meshes on
arbitrary cross sections. It is a direct-rendering method (the intersection of
the surface and the model is not extracted) that uses GPU to ensure efficiency.
The basic idea is to use the graphics rasterizer to generate the fragments of
the cross-section surface and to compute the intersection of each fragment with
the model. For this, it is necessary to test the location of each fragment with
respect to the unstructured mesh in an efficient way. As acceleration data
structure, we tested three variations of regular grids to store the elements
(cells) of the mesh, and each element is represented by the list of its face
planes, easing the in-out test between fragments and elements. Once the element
that contains the fragment is determined, procedures are applied to interpolate
the scalar field and to check whether the fragment is close to the element
boundary, to reveal the mesh wireframe on the surface. The achieved results
demonstrate the effectiveness and the efficiency of the proposed method.
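The in-out test mentioned above can be sketched as follows (our illustration;
the outward-facing plane convention and the tetrahedral cell are assumptions): a
fragment at point p lies inside a cell exactly when it is on the inner side of
every face plane:
    import numpy as np
    def inside_cell(p, face_planes, eps=1e-9):
        # face_planes: list of (n, d) with plane equation n . x + d = 0 and
        # normals pointing out of the cell.
        return all(np.dot(n, p) + d <= eps for n, d in face_planes)
    # Unit tetrahedron with vertices (0,0,0), (1,0,0), (0,1,0), (0,0,1):
    s = 3 ** 0.5
    tet = [(np.array([-1.0, 0.0, 0.0]), 0.0),            # face x = 0
           (np.array([0.0, -1.0, 0.0]), 0.0),            # face y = 0
           (np.array([0.0, 0.0, -1.0]), 0.0),            # face z = 0
           (np.array([1.0, 1.0, 1.0]) / s, -1.0 / s)]    # face x + y + z = 1
    print(inside_cell(np.array([0.1, 0.1, 0.1]), tet))   # True
    print(inside_cell(np.array([0.9, 0.9, 0.9]), tet))   # False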
[13_MSc_miranda] **
Bernardo Evangelho MIRANDA. Conceitos centrais e componentização de diagramas de
classe UML representados em grafo. [Title in English: Core concepts and
componentization of UML class diagrams represented in graph]. M.Sc. Diss. Port.
Presentation: 05/04/2013. 101 p. Advisor: Arndt von Staa.
Abstract: The goal of this dissertation is to develop a web application that
explores different analyses of UML diagrams. One of the main features of this
tool is compatibility with other tools that create UML diagrams. To that end, we
implemented a generic XML parser that imports class diagrams and provides extra
information regarding the diagram. The first study is the identification of the
classes that can be considered "core concepts" (defining the important classes
of the system). The other study performs graph clustering, aiming to create
groups of classes, making it possible to generate components. Finally, we
discuss estimates of importance and degree of cohesion, as well as size and
coupling metrics of the diagram as a whole and of the generated components.
[13_MSc_fabri]
Bruno Ferreira FABRI. FEAF: uma infraestrutura para análise da evolução das
características de uma linha de produto de software. [Title in English: FEAF: an
infrastructure for analyzing the evolution of the features in a Software Product
Line]. M.Sc. Diss. Port. Presentation: 18/04/13. 97 p. Advisor: Carlos José
Pereira de Lucena.
Abstract: A Software Product Line (SPL) is a software engineering approach to
developing software system families that share common features and differ in
other features according to the requested software systems. The adoption of the
SPL approach can promote several benefits, such as cost reduction, product
quality, productivity, and shorter time to market. As with any approach to
software development, the activities of software evolution should be seen as
something inevitable, constant and routine. Within the scenario of SPL
development, evolution activities are driven by changes in the features over the
releases. As such, the development of SPLs imposes new challenges on the
activities of analyzing and comprehending the evolution of their features across
the various releases of an SPL. Recent research works propose visual strategies
with automated support by visualization tools. Such approaches have limitations,
since some do not provide support for comparing features in different releases
of an SPL and others do not support the concept of features present in the SPL.
This dissertation proposes FEAF, an infrastructure to support the construction
of tools for analyzing and comprehending the evolution of features in different
releases of an SPL. Based on the proposed infrastructure, we developed a visual
tool, called FEACP, which assists with the analysis and understanding of the
evolution of the features of an SPL. It provides a visualization strategy that
uses two lightweight views based on a graph representation. The tool was
evaluated through a controlled experiment that compares our visualization
strategy with the visualization strategy of Source Miner Evolution.
[13_MSc_ourofino]
Carla Gonçalves OUROFINO. Materialização e manutenção de ligações owl:sameAs.
[Title in English: Materialization and maintenance of owl:sameAs links]. M.Sc.
Diss. Port. Presentation: 23/08/13. 97 p. Advisor: Marco Antonio Casanova.
Abstract: The Web of Data has grown significantly in recent years, not only
in the amount of data but also in the number of data sources. In parallel with
this trend, owl:sameAs links have been increasingly used to connect
equivalent data published by different sources. As a consequence, it becomes
necessary to have a routine for the identification and maintenance of these
connections. In order to automate this task, we have developed the "MsA
Framework - sameAs Materialization" to materialize and recompute owl:sameAs
links between local databases and data published on the Web. These connections,
once identified, are materialized along with the local data and recomputed only
when necessary. To achieve this goal, the tool monitors the operations
(insertion, update and deletion) performed on local and remote records, and for
each type of operation it implements
[13_PhD_batista]
Carlos Eduardo Coelho Freire BATISTA. GINGA-MD: uma plataforma para suporte à
execução de aplicações hipermídia multi-dispositivo baseada em NCL. [Title in
English: GINGA-MD: an NCL-based platform for supporting the execution of
multi-device hypermedia applications]. Ph.D. Thesis. Port. Presentation:
28/02/13. 160 p. Advisor: Luiz Fernando Gomes Soares.
Abstract: The increase in digital media formats has fostered the creation of
interactive multimedia applications, including the development of multi-user
interactive scenarios. There are many spaces where digital interactive
multimedia artifacts are consumed by groups of people, such as homes with
Digital TV, theaters with Digital Cinema and conferences with interactive
lecture presentations. In this thesis, a platform to support the execution of
distributed hypermedia applications is proposed aiming at these spaces of
collective digital multimedia consumption. The proposed platform uses the NCL
language as the application description format, since it is a hypermedia
glue-language that supports the concept of multi-device applications following a
declarative abstraction level. The platform establishes a reference software
architecture, defining mechanisms and interfaces for heterogeneous device
integration. A prototype is implemented and validated against different usage
scenarios, in which hypermedia applications use media resources coming from and
going to multiple devices.
[13_MSc_medeiros]
Daniel Pires de Sá MEDEIROS. Uma ferramenta de interação 3D para ambientes
virtuais de engenharia utilizando dispositivos móveis. [Title in English: A 3D
interaction tool for engineering virtual environments using mobile devices].
M.Sc. Diss. Port. Presentation: 17/09/2013. 80 p. Advisor: Alberto Barbosa
Raposo.
Abstract: Interaction in engineering virtual environments is characterized by
the high precision level needed for the execution of tasks specific to this kind
of environment. Generally this kind of task uses specific interaction devices
with 4 or more degrees of freedom (DOF). Current applications involving 3D
interaction use interaction devices for object modelling or for the
implementation of navigation, selection and manipulation techniques in a virtual
environment. A related problem is the necessity of controlling tasks that are
naturally non-immersive, such as symbolic input (e.g., text, photos). Another
problem is the steep learning curve of such non-conventional devices. The
addition of sensors and the popularization of smartphones and tablets have
allowed the use of such devices in virtual engineering environments. These
devices, besides their popularity and sensors, stand out for the possibility of
including additional information and performing naturally non-immersive tasks.
This work presents a tablet-based 3D interaction tool, which aggregates all
major 3D interaction tasks, such as navigation, selection, manipulation, system
control and symbolic input. To evaluate the proposed tool we used the
SimUEP-Ambsim application, a training simulator for oil and gas platforms that
has the needed complexity and allows the use of all techniques implemented.
[13_MSc_rodrigues]
Danilo Moret RODRIGUES. Distributed RDF graph keyword search. [Title in
Portuguese: Busca distribuída em grafo RDF por palavra-chave]. M.Sc. Diss. Eng.
Presentation: 25/03/13. 66 p. Advisor: Karin Koogan Breitman.
Abstract: The goal of this dissertation is to improve RDF keyword search. We
propose a scalable approach, based on a tensor representation that allows for
distributed storage, and thus the use of parallel techniques to speed up the
search over large linked data sets, in particular those published as Linked
Data. An unprecedented amount of information is becoming available following the
principles of Linked Data, forming what is called the Web of Data. This
information, typically codified as RDF subject-predicate-object triples, is
commonly abstracted as a graph in which subjects and objects are nodes, and
predicates are edges connecting them. As a consequence of the widespread
adoption of search engines on the World Wide Web, users are familiar with
keyword search. For RDF graphs, however, extracting a coherent subset of data
graphs to enrich search results is a time-consuming and expensive task, and it
is expected to be executed on the fly at the user's prompt. The dissertation's
goal is
to handle this problem. A recent proposal has been made to index RDF graphs as a
sparse matrix with the pre-computed information necessary for faster retrieval
of sub-graphs, and the use of tensor-based queries over the sparse matrix. The
tensor approach can leverage modern distributed computing techniques, e.g.,
nonrelational database sharding and the MapReduce model. In this dissertation,
we propose a design and explore the viability of the tensor-based approach to
build a distributed datastore and speed up keyword search with a parallel
approach.
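A toy sketch of the sparse-matrix view described above (our illustration with
SciPy; triples and keyword are made up): triples become an adjacency matrix, and
one hop of sub-graph expansion from a keyword-matched node becomes a sparse
matrix-vector product, an operation that shards naturally across machines:
    import numpy as np
    from scipy.sparse import csr_matrix
    triples = [("rio", "locatedIn", "brazil"), ("puc", "locatedIn", "rio"),
               ("puc", "type", "university")]
    nodes = sorted({s for s, _, o in triples} | {o for _, _, o in triples})
    idx = {n: i for i, n in enumerate(nodes)}
    rows = [idx[s] for s, _, o in triples]
    cols = [idx[o] for s, _, o in triples]
    adj = csr_matrix((np.ones(len(triples)), (rows, cols)),
                     shape=(len(nodes), len(nodes)))
    seed = np.zeros(len(nodes))
    seed[idx["puc"]] = 1                    # node matched by the keyword "puc"
    reached = adj.T.dot(seed)               # one expansion step: 1-hop neighbors
    print([nodes[i] for i in np.flatnonzero(reached)])  # ['rio', 'university']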
[13_PhD_carvalho]
Dárlinton Barbosa Feres CARVALHO.
Combining a process
and tools to support the analysis of online
communities applied to healthcare. [Title in Portuguese: Combinando um processo
e ferramentas para apoiar a análise de comunidades online aplicada à área de
saúde]. Ph.D. Thesis. Eng. Presentation: 22/03/13. 82 p. Advisor: Carlos José
Pereira de Lucena.
Abstract: This thesis aims to exploit valuable social media, especially those
available in online communities of social network sites, in order to perform
social studies about healthcare issues. Based on a practical approach, a process
was defined to conduct such studies. This process relies on tailored
computational tools to support specific tasks such as content retrieval,
selection, and analysis. Two tools stand out and are presented here because of
their utility and the complexity of the process on which their development was
based. The first tool, for the benefit of online community analysis, is the
Community Association Map, developed to support experts in understanding users'
interests based on their associations within their communities. Our second tool
(TorchSR) aims to aid analysts in the selection of discussions from online
forums to be manually analyzed by qualitative research techniques (e.g., content
and discourse analysis). This task, which we define as the content selection
problem, was tackled with a tool based on unsupervised machine learning
techniques, such as hierarchical clustering. An exploratory case study shows
that TorchSR helps analysts deal with the problem. The proposed process was
employed in two studies about relevant healthcare issues (hepatitis C and drug
abuse), which resulted in interesting findings in the field of public health. In
conclusion, this thesis presents a practical application of computational social
science to the field of health, through the development of a process and tools
that support analysts.
[13_MSc_ferreira]
Eduardo de Oliveira FERREIRA. Geração automática de suítes de teste da interface
com usuário a partir de casos de uso. [Title in English: Automatic generation of
user interface test suites specified by use cases]. M.Sc. Diss. Port.
Presentation: 23/08/13. 118 p. Advisor: Arndt von Staa.
Abstract: It is expected that the development of test suites from models can
contribute substantially to reducing the human effort and to increasing the
effectiveness of the generated tests. By tests' effectiveness we mean the
percentage of existing defects found by these tests. Most of these techniques
are based on state machines and are mostly directed at testing functionality.
However, there is a need to test highly interactive systems, such as smartphones
and tablets, from a description of their human-computer interface. The goal of
this dissertation is to make a first evaluation of a technique aimed at
generating test suites for testing graphical human-computer interfaces. For this
purpose, a tool called Easy was developed and its effectiveness evaluated; it
uses tabular use cases and state machines for the automatic generation of the
test suites. The use cases are described in restricted natural language. From
this description, the tool builds a state machine and then uses it to generate
scenarios. By construction, the scenarios conform to the use cases. Each
scenario corresponds to a test case. The scenarios are presented to the user in
natural language, allowing their visualization before the generation of the
final test scripts. The generated scripts are intended for an automated
execution tool geared toward testing graphical interfaces. In this work, we used
the UI Automation tool, responsible for running tests on applications for iOS,
the operating system of iPhones, iPads and iPod touches. The effectiveness of
the procedure was evaluated on a real application, available in the App Store.
In addition, HCI tests were performed in order to evaluate the influence on the
cost of producing the test suite.
[13_PhD_almentero]
Eduardo Kinder ALMENTERO. Dos requisitos ao código: um processo para
desenvolvimento de software mais transparente. [Title in English: From
requirements to code: a process to develop more transparent software]. Ph.D.
Thesis. Port. Presentation: 12/12/13. 183 p. Advisor: Julio Cesar Sampaio do
Prado Leite.
Abstract: Transparency is a keyword present in different contexts, such as the
economic and the political ones, and, currently, one of the new contexts in
which it stands is software. Free (open source) software is a good example of
transparency: its great advantage is that anyone can access the source code and
choose the characteristics he/she wants; but, in this way, we serve only those
who understand source code. Understanding software source code can be an arduous
task, especially if no technique has been used to facilitate reading. In this
work we explore a method for developing free software based on the use of
scenarios. The result of applying this method is a single document, the source
code, in which the scenarios are integrated within the code, making it easier to
read and understand, thus bringing more transparency to the software. This
method was refined during its application to reengineer the C&L software. In
order to produce additional documentation, besides the scenarios embedded in the
code, we used the LEL (Language Extended Lexicon) technique to map the namespace
of the new C&L.
[13_MSc_silva]
Eduardo Ribeiro SILVA. Uso de ambientes imersivos para colaboração com usuários
remotos não imersos. [Title in English: Use of immersive environments in
collaboration with remote non-immersed users]. M.Sc. Diss. Port. Presentation:
10/04/13. 100 p. Advisor: Alberto Barbosa Raposo.
Abstract: Throughout the years, many studies have explored the potential of
virtual reality technologies to support collaborative work, particularly for
training and simulation applications. By using a virtual environment, it is
possible to create applications for simulation and training capable of
representing real scenes, which also allow more flexibility to make structural
changes to objects and other aspects of the virtual scenario to simulate, for
instance, emergency situations and accidents, which are difficult to simulate in
a real scenario. This work studies the use of collaboration in immersive
environments to support user training. The 3C collaboration model
(communication, cooperation and coordination) was used to define the model and
collaboration tools in the development of a collaborative virtual reality
application. The 3C model defines that in a collaborative environment
individuals need to exchange information (communication) and organize (coordination)
so they can work together in a shared environment (cooperation). For this work
we implemented a collaborative training program between a user immersed in a
CAVE and a remote user on a desktop computer. We implemented techniques to help
users accomplish collaboration: waypointing and highlighting. To assist
wayfinding, the cognitive process of defining a path through an environment by
using and acquiring spatial knowledge, aided by natural and artificial cues, the
desktop user was allowed to create waypoints, artificial cues that allow the
definition of a path to the task goal. Highlighting consists of applying a
silhouette to objects, making them stand out and allowing users to easily
identify the objects of interest. A series of tests was developed with
the main objective of evaluating the heterogeneous scenario for collaboration,
checking aspects such as the importance and effectiveness of audio
communication, the need for text communication, and the role of the auxiliary
techniques (highlighting and waypointing) as eventual substitutes for or
complements to communication techniques in tasks based on real situations with
different levels of complexity.
[13_MSc_castillapenaranda] **
Fabian Arturo CASTILLA PEÑARANDA.
Vehicle routing problems with
time windows and exact synchronization constraints. [Title in Portuguese:
Problemas de roteamento de veículos com janelas de tempo e sincronização exata
de operação]. M.Sc. Diss. Eng. Presentation: 10/06/2013. 68 p. Advisor: Marcus
Vinicius Soledade Poggi de Aragão.
Abstract: This dissertation addresses a generalization of the vehicle routing
problem (VRP) that arises in real-life applications in ports and mine
operations. In this VRP variant, each customer may demand different types of
vehicles to perform a task collaboratively. Vehicles are allowed to wait at the
locations, but they must start operating at the same time. The objective is to
route the available vehicles while maximizing the (weighted) sum of served
customers and minimizing the total distance traveled. The specific case where
all customers must be served while minimizing the total distance traveled is the
central problem studied here. This special case can be viewed as a
straightforward generalization of a well-known and more specific routing
problem, the VRP with time windows (VRPTW), where the capacity of the vehicles
is sufficiently large. We support this narrower scope by noting that it allows a
clear comparison of the problem hardness through its relation to the VRPTW.
Following the classification of synchronization in vehicle routing proposed by
(DREXL, 2012), we name this problem the Vehicle Routing Problem with Time
Windows and Exact Operation Synchronization (VRPTWEOS). In this work, a formal
definition of the VRPTWEOS is provided. Integer programming models for this
problem are proposed and analyzed. Furthermore, we propose a solution method
based on the Dantzig-Wolfe decomposition, for which exact and approximate
resolution algorithms are described. In order to test the performance of those
algorithms, a group of benchmark instances for the VRPTWEOS was created on top
of the Solomon benchmark for the VRPTW. The method used to create the benchmark
instances is described in detail. Computational experiments on the mentioned set
of instances showed that the proposed solution approach is a promising
alternative for solving the VRPTWEOS.
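A minimal illustration of the exact operation synchronization requirement (ours,
not a model from the dissertation): vehicles may wait at a customer, but all
required vehicles must start the task at the same instant, inside the customer's
time window [a, b]:
    def synchronized_start(arrivals, a, b):
        # arrivals: arrival time of each required vehicle; returns the common
        # start time, or None if the customer cannot be served.
        start = max(max(arrivals), a)  # everyone waits for the last arrival
        return start if start <= b else None
    print(synchronized_start([9.0, 10.5], a=10.0, b=12.0))  # 10.5
    print(synchronized_start([9.0, 13.0], a=10.0, b=12.0))  # None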
[13_MSc_simoes]
Fabiana Pedreira SIMÕES.
Supporting
end user reporting of HCI issues in open source software projects. [Title in
Portuguese: Apoiando o relato de
problemas de IHC em projetos de software open source]. M.Sc. Diss.
Eng. Presentation: 21/08/2013. 144 p. Advisor: Simone Diniz Junqueira Barbosa.
Abstract: Empowering end users to
proactively contribute to OSS by reporting HCI issues not only represents a
potential approach to solving HCI problems in OSS projects, but it also fits the
Open Source values and ideology. By referring to the end users’ personal
experiences and feedback reports, designers in OSS communities may build their
design rationale not only in an open and transparent manner, but also in such a
way that end users relate to the values embedded in the community. This study
aims to contribute to the existing literature by exploring (a) how issue reports
fit and influence OSS designers' activities, (b) what the information needs of
designers in OSS projects are, and (c) how to support users on the task of
creating HCI issues reports that meet those needs. In order to collect data
about questions (a) and (b), we conducted interviews with four designers
contributing to OSS projects, and qualitatively evaluated a set of 547 bugs
reported under HCI-related keywords. Using this data and based on Semiotic
Engineering, we designed a form for users to report HCI issues. To investigate
how well this form communicates the information needs of OSS designers and
address question (c), we performed a study in which participants were invited to
report HCI issues through the designed form.
[13_MSc_bacelar]
Abstract: To achieve high scalability in distributed simulation, it is
necessary to avoid communication bottlenecks. Messages between machines are
necessary when an agent kept on one computer needs to interact with elements
kept on another computer. This work presents an approach to dynamically
partitioning a distributed simulation, keeping each agent on the same network
node as the elements it accesses most, thereby reducing the communication cost
between the network computers. To reach this objective, we use the concept of
interest management, which aims to provide an agent with only the smallest set
of information necessary to allow it to interact with the environment in a
coherent way. To illustrate the proposed solution, a case study was developed
comprising a distributed simulation representing an oil scenario.
[13_MSc_silva]
Fernando Freitas SILVA. Uma nova abordagem de mineração de repositórios de
software utilizando ferramentas da Web Semântica. [Title in English: A new
approach for mining software repositories using semantic tools]. M.Sc. Diss.
Port. Presentation: 15/08/2013. 178 p. Advisor: Daniel Schwabe.
Abstract: The mining of software repositories is a field of research that
extracts and analyses information available in software repositories, such as
version control systems or issue trackers. Currently, several research works in
this area have used Semantic Web tools during the extraction process to overcome
some limitations that the traditional approach faces. The objective of this work
is to extend the existing approaches that use Semantic Web tools in order to
mine information not considered in those works. One such piece of information is
the relationship between revisions in version control and the changes that occur
in the Abstract Syntax Trees of the files modified by these revisions.
Additionally, this new approach allows modeling the interdependencies of
software projects and their licenses, and extracting information from builds
generated by continuous integration tools. The validation of this approach is
demonstrated through a set of questions that are asked by developers and
managers during the execution of a project and that have been identified in
various works in the literature. We show how the questions are translated into
SPARQL queries and how this work can answer questions that other tools answer
only partially or not at all.
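As a flavor of the question-to-SPARQL translation mentioned above, here is a
hypothetical example using rdflib; the namespace, properties and data are ours,
not the dissertation's vocabulary. The question is "which revisions modified
file Foo.java?":
    from rdflib import Graph, Literal, Namespace
    EX = Namespace("http://example.org/msr#")
    g = Graph()
    g.add((EX.rev1, EX.modifies, Literal("Foo.java")))
    g.add((EX.rev2, EX.modifies, Literal("Bar.java")))
    q = """
    PREFIX ex: <http://example.org/msr#>
    SELECT ?rev WHERE { ?rev ex:modifies "Foo.java" . }
    """
    for row in g.query(q):
        print(row.rev)  # http://example.org/msr#rev1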
[13_PhD_medeirosneto]
Francisco Dantas de MEDEIROS NETO.
On the role of composition properties on program stability. [Title in
Portuguese: Análise de propriedades de códigos de composição em estabilidade de
programas]. Ph.D. Thesis. Eng. Presentation: 15/03/13. 166 p. Advisor:
Alessandro Fabrício Garcia.
Abstract: The demand for incremental software development has driven a
search for advanced programming techniques, such as aspect-oriented programming
and feature-oriented programming. These techniques share the goal of supporting
localized implementation of software changes in order to promote program
stability. To achieve this goal, they offer a wide range of sophisticated
composition mechanisms, which provide means to flexibly define the composition
of two or more modules in a program. However, given the complexity of the
resulting composition code, the initial definition and further changes to a
single composition specification might
affect the structure and behaviour of multiple modules, thereby harming the
program stability. A complicating
factor is that those changes often require some reasoning about certain
composition properties, which are not explicit in the implementation or design
artefacts. Unfortunately, there is no understanding in the state of the art
about the composition properties that affect positively or negatively the
program stability. This understanding is not yet possible as: (i) there is no
conceptual characterization and quantification means for composition code
properties, and (ii) there is no empirical investigation on the influence of
these properties on program stability. A side effect of these gaps is that
developers have resorted to conventional metrics, such as coupling, to determine
or predict the stability of a program implemented with advanced programming
techniques. In this context, this thesis presents three contributions to
overcome the aforementioned problems.
First, we have developed an empirical study revealing that widely-used metrics,
such as coupling, are not effective indicators of stability when advanced
programming techniques are used. Second, we propose a measurement framework
encompassing a suite of composition metrics intended to quantify properties of
the composition code. This framework is based on a meta-model and terminology
for characterizing the elements and properties of the composition code. This
framework is extensible and agnostic to particular programming techniques.
Third, we also investigate how to alleviate the maintenance effort of performing
changes related to the composition code. We evaluate whether the availability of
design models enriched with the specification of composition properties helps
developers improve program stability in their maintenance tasks.
[13_PhD_santanna]
Francisco Figueiredo Goytacaz SANT'ANNA.
Safe systems-level
concurrency on resource-cosntrained nodes with Céu.
[Title in Portuguese: ].
Ph.D. Thesis. Eng.
Presentation: 12/09/13. 86 p. Advisors: Roberto Ierusalimschy and Noemi da La
Roque Rodriguez.
Abstract: Despite the continuous research to facilitate Wireless Sensor
Network development, most safety analysis and mitigation efforts in concurrency
are still left to developers, who must manage synchronization and shared memory
explicitly. We propose a system language that ensures safe concurrency by
handling threats at compile time, rather than at runtime. The synchronous and
static foundation of our design allows for a simple reasoning about concurrency
that enables compile-time analysis, resulting in deterministic and memory-safe
programs. As a trade-off, our design imposes limitations on the language
expressiveness, such as doing computationally intensive operations and meeting
hard real-time responsiveness. To show that the achieved expressiveness and
responsiveness are sufficient for a wide range of WSN applications, we implement
widespread network protocols and the CC2420 radio driver. The implementations
show a reduction in source code size, with a penalty of memory increase below
10% in comparison to nesC. Overall, we ensure safety properties for programs
relying on high-level control abstractions that also lead to concise and
readable code.
[13_PhD_guimaraes]
Francisco José Zamith GUIMARÃES.
O uso de histórias como forma de explicitar o conhecimento tácito. [Title in
English: The use of stories as a way to make tacit knowledge explicit]. Ph.D.
Thesis. Port. Presentation: 25/03/13. 158 p. Advisor: Daniel Schwabe.
Abstract: With the globalization process, companies are increasingly
decentralized, which makes knowledge exchange between employees a challenge. In
the literature, several Knowledge Management practices aim at facilitating
knowledge exchange, but many of them have difficulties in collecting and later
reusing that knowledge. Some of these practices are based on the exchange of
employee experiences through the sharing of stories. The objective of this
thesis is to define a model for the representation of stories (an ontology),
together with a set of activities, rules and tools to improve the collective
dynamics of gathering and reusing knowledge. Through some experiments we
observed that the proposed model outperforms other models regarding the
collection and reuse of knowledge.
[13_MSc_chequer]
Gabriel Agostini CHEQUER. Alinhamento entre arquitetura empresarial e PDTI: um
estudo de caso. [Title in English: Aligning
enterprise architecture with IT planning: a case study]. M.Sc. Diss. Port.
Presentation: 23/08/13. 138 p. Advisor: Arndt von Staa.
Abstract: Good IT planning is essential to enable and leverage organizational
performance. The Information Technology Strategic Plan (ITSP) is a planning tool
that defines the IT strategies and action plans. An essential part of the ITSP
is the construction of an Enterprise Architecture, which integrates the business
processes, technologies and information systems of an organization to support
the business goals. The objective of this work is to analyze the role of
Enterprise Architecture in the context of an ITSP, using a method that supports
the identification of suggestions for an Information Architecture Process in use
by the Software Engineering Laboratory (LES) of the Catholic University of Rio
de Janeiro (PUC-Rio). An assessment was performed. The results are encouraging.
[13_MSc_lopes]
Guilherme Alves LOPES.
Um framework para simulação de
microbacias como serious game. [Title in English: A framework for microbasins simulation as a serious game]. M.Sc. Diss. Port. Presentation:
04/04/13. 52 p. Advisor: Bruno Feijó.
Abstract: This dissertation presents a framework for the simulation of a
watershed environment as a serious game, where the commitment of a more
realistic representation of characters, processes and environment is aligned
with the usual features of entertaining games. The game aims to assist education
and the discussion of topics on economic sustainability and environmental
preservation.
In addition to the physical simulation of the terrain, this work adds a new
functionality to the simulator which makes it capable of simulating the
interaction of the inhabitants of a small watershed with the terrain and the
interaction between them as a game. The framework allows us to implement logic
in the agents that simulate inhabitants, and the settings for updating prices
and values of production.
[13_MSc_cunha]
Guilherme Carvalho CUNHA.
Reconhecimento de emoções através de
imagens em tempo real com o uso de ASM e SVM. [Title in English: Real time
emotion recognition based on images using ASM and SVM]. M.Sc. Diss. Port.
Presentation: 06/04/13. 59 p. Advisor: Bruno Feijó.
Abstract: Facial expressions provide a large amount of information about a
person, making the ability to interpret them a highly valued task that can be
used in several fields of Informatics, such as human-machine interfaces, digital
games, interactive storytelling and digital TV/cinema. This dissertation
discusses the process of recognizing emotions in real time using ASM (Active
Shape Model) and SVM (Support Vector Machine), and presents a comparison between
two commonly used approaches to attribute extraction: neutral face and average
face. As such a comparison cannot be found in the literature, the results
presented are valuable to the development of applications that deal with emotion
expression in real time. The current study considers six types of emotions:
happiness, sadness, anger, fear, surprise and disgust.
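A toy version of the pipeline the abstract describes (our sketch with
scikit-learn; synthetic landmarks stand in for ASM output): landmark
displacements relative to a neutral face form the feature vector, and an SVM
classifies the emotion:
    import numpy as np
    from sklearn.svm import SVC
    rng = np.random.default_rng(0)
    EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]
    neutral = rng.normal(size=(68, 2))   # 68 ASM-style landmarks, neutral face
    X, y = [], []
    for label in range(6):               # synthetic displaced faces per emotion
        for _ in range(20):
            face = neutral + 0.1 * label + rng.normal(scale=0.05, size=(68, 2))
            X.append((face - neutral).ravel())  # neutral-face feature extraction
            y.append(label)
    clf = SVC(kernel="rbf").fit(np.array(X), y)
    probe = (0.3 + rng.normal(scale=0.05, size=(68, 2))).ravel()  # label-3 offset
    print(EMOTIONS[clf.predict([probe])[0]])  # expected: "fear"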
[13_PhD_piccinini]
Helena Serrão PICCININI.
W-Ray - uma abordagem para publicação de dados da deep Web. [Title in English:
W-Ray - an approach to deep Web data publication]. Ph.D. Thesis. Port.
Presentation: 21/06/13. 195 p. Advisor: Marco Antonio Casanova.
Abstract: The Deep Web comprises data stored in databases, dynamic pages,
scripted pages and multimedia data, among other types of objects. The databases
of the Deep Web are generally underrepresented by the search engines due to the
technical challenges of locating, accessing and indexing them. The use of
hyperlinks by search engines is not sufficient to achieve all the Deep Web data,
requiring interaction with complex queries interfaces. This thesis presents an
approach, called W-Ray, that provides visibility to Deep Web data. The approach
relies on describing the relevant data through well-structured sentences, and on
publishing the sentences as static Web pages. The sentences can be generated
with embedded RDFa, keeping the semantics of the database. The Web pages thus
generated are indexed by traditional Web crawlers and sophisticated crawlers
that support semantic search. A tool that supports the W-Ray approach is also
presented. The approach has been successfully implemented for some real
databases.
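A hypothetical illustration of the kind of page W-Ray generates (our example;
the vocabulary URI and data are made up): a database row rendered as a
well-structured sentence with embedded RDFa, indexable by crawlers while
preserving the database semantics:
    row = {"name": "Copacabana", "city": "Rio de Janeiro"}
    sentence = (
        '<p vocab="http://example.org/geo#" typeof="Beach">'
        '<span property="name">{name}</span> is a beach located in '
        '<span property="city">{city}</span>.</p>'
    ).format(**row)
    print(sentence)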
[13_MSc_crisologobohorquez]
Isabel Bebelú Crisólogo BOHORQUEZ.
Carioca: framework para la configuración de ambientes inteligentes utilizando
dispositivos con recursos limitados. [Title in English: Carioca: framework for
the configuration of smart environments using resource-constrained devices].
M.Sc. Diss. Port. Presentation: 10/06/2013. 68 p. Advisor: Hugo Fuks.
Abstract: In this work we introduce a framework to make "Things" available on
the Web through a wireless sensor network with resource-limited nodes, following
the Web of Things paradigm. We focus on nodes that do not have an IP address and
have limited memory capacity. We offer a reusable, configurable tool for Smart
Home scenarios that, together with the wireless sensor network, makes it
possible to monitor and act on the physical environment through the Web. For the
implementation of wireless sensor network applications we used the TERRA tool,
which offers remote configuration and remote programming of the nodes. A
functional evaluation and a case study were carried out. In evaluating the
system, we sought to verify that the initially planned objectives were reflected
in our software. The case study was aimed at the programmer who uses the
framework. The most important contributions of this work are a framework for
monitoring these sensor nodes and the adaptation of the framework to other
physical-environment applications. These contributions were based on an approach
that makes possible the distributed programming of nodes using TERRA on
resource-limited devices. This work presents the whole process of prototyping
the framework, the application cases, and the difficulties encountered.
[13_PhD_macia]
Isela MACIA BERTRAN.
On the detection of architecturally-relevant code anomalies in software
systems. [Title in Portuguese: Detecção de anomalias de código arquiteturalmente
relevantes em sistemas de software]. Ph.D. Thesis. Eng. Presentation: 20/03/13.
260 p. Advisor: Arndt von Staa.
Abstract: Code anomalies can signal software architecture degradation.
However, the identification of architecturally-relevant code anomalies (i.e.
code anomalies that strongly imply architectural deficiencies) is particularly
challenging due to: (i) lack of understanding about the relationship between
code anomalies and architectural degradation, (ii) the focus on source code
anomaly detection without considering how it relates to the software
architecture, and (iii) lack of knowledge about how reliable these detection
techniques are when revealing architecturally-relevant code anomalies. This
thesis presents techniques for identifying architecturally-relevant code
anomalies. Architecture-sensitive metrics and detection strategies were defined
to overcome the limitations of conventional detection strategies. These metrics
and strategies leverage traces that can be established between architectural
views and system implementation. The thesis also documents code anomaly patterns
(i.e. recurring anomaly relationships) that are strongly related to
architectural problems. A tool, called SCOOP, was developed to collect the
architecture-sensitive metrics, apply the new detection strategies, and identify
the documented code anomaly patterns. Using this tool, we evaluated our
technique in a series of empirical studies, comparing its accuracy with that of
conventional detection techniques when identifying architecturally-relevant code
anomalies.
[13_MSc_coelho]
Jeferson Rômulo Pereira COELHO.
Uma solução eficiente para subdivisão de malhas triangulares. [Title in
English: An efficient solution for triangular mesh subdivision]. M.Sc. Diss. Port. Presentation: 26/03/13.
92 p. Advisor: Marcelo Gattass.
Abstract: Subdivision of triangular surfaces is an important problem in
modeling and animation activities. Deforming a surface can greatly affect the
quality of the triangulation, as equilateral triangles become elongated. One
solution to this problem is to refine the deformed region. Refinement techniques
require the support of a topological data structure. These structures must be
efficient in terms of memory and time. An additional requirement is that they
must also be easily stored in secondary memory. This dissertation proposes a
framework based on the Corner Table data structure with support for subdivision
of triangular meshes. The proposed framework was implemented in a C++ library.
Using this library, this work presents a set of test results that demonstrate
the desired efficiency.
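For reference, a minimal sketch of the Corner Table idea (ours, not the
library's code): the mesh lives in flat integer arrays, and the topology queries
a subdivision algorithm needs reduce to integer arithmetic on corner indices:
    # Two triangles sharing edge (1, 2): vertices (0, 1, 2) and (1, 3, 2).
    V = [0, 1, 2, 1, 3, 2]      # V[c]: vertex at corner c
    O = [4, -1, -1, -1, 0, -1]  # O[c]: corner opposite c; -1 on the boundary
    def tri(c): return c // 3                  # triangle owning corner c
    def nxt(c): return 3 * tri(c) + (c + 1) % 3
    def prv(c): return 3 * tri(c) + (c + 2) % 3
    # Corner 0 (vertex 0) faces edge (1, 2); the opposite corner holds vertex 3.
    print(V[nxt(0)], V[prv(0)])  # 1 2
    print(V[O[0]])               # 3
Because everything is plain arrays, the structure stays compact in memory and is
trivially written to secondary storage, matching the requirements stated above.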
[13_MSc_palomarespecho]
Jéssica Margarita PALOMARES PECHO.
Estudo e avaliação de técnicas de separabilidade e da oclusão em superfícies
multitoque. [Title in English: Study and evaluation of separability techniques
and occlusion in multitouch surfaces]. M.Sc. Diss. Port. Presentation: 12/09/13.
103 p. Advisor: Alberto Barbosa Raposo.
Abstract: The multitouch technology has emerged as a trend in recent years.
Multitouch interfaces allow interacting with a virtual object directly,
similarly to a real object. However, this technology does not bring only
advantages. There are several issues to be resolved, such as the accuracy of the
manipulation, occlusion, the separability of the manipulation, etc. Multitouch
interfaces allow multiple spatial transformations to be performed on a virtual
object with only one gesture. For example, an object can be rotated, translated
and scaled with two fingers in a single gesture. However, some unwanted
movements may occur accidentally. Separability techniques appear with the intent
of preventing unwanted movements on multitouch surfaces. Occlusion is another
problem that occurs in multitouch interfaces. Often the user's hand hides the
object with which he/she interacts, or the user's action on the interface
hinders the movement when clicking a button that triggers an action. This
dissertation studies, proposes and evaluates two separability techniques, aiming
to reduce the problems that arise from the excessive freedom of manipulation in
multitouch interfaces, and evaluates the efficiency of these techniques. The
techniques developed are applicable not only to simple virtual objects but also
to WIMP (windows, icons, menus, pointer) objects, aiming to reduce occlusion. A
series of tests was performed to evaluate precision, occlusion, time for task
completion, and ease of use.
[13_MSc_motta]
José Antonio Gonçalves MOTTA. Investigating the case-based reasoning process
during early HCI design. [Title in Portuguese: Investigando o processo de
raciocínio baseado em casos durante o início do design de IHC]. M.Sc. Diss. Eng.
Presentation: 21/02/13. 158 p. Advisor: Simone Diniz Junqueira Barbosa.
Abstract: During the early stages of design, the designer forms an initial
understanding about the problem and some ideas on how to solve it, often
influenced by previous design knowledge. In order to support HCI design in this
context, we investigated ways to use case-based reasoning (CBR) to help
designers access and reuse design knowledge to solve new HCI design problems. We
conducted interviews with professional HCI designers to collect data about how
they deal with design problems, and their motivations and expectations regarding
the use of design knowledge aided by a CBR tool. Using this data, we designed
and developed a tool called CHIDeK,
which has a library containing HCI design cases and provides access to them
through faceted navigation, direct links between cases, and search. To
investigate the way CHIDeK influences the design activity, we conducted a study
that simulated the early stage of HCI design of an online bike reservation
system. Some participants could solve the problem while having access to CHIDeK
and others had to solve it without CHIDeK. We discovered that the cases from
CHIDeK supported the design by motivating the designers' reflective process,
triggering their memories of experiences with systems similar to the ones in the
cases, and helping generate new ideas. We also identified some limitations in
the case representation, which offers an opportunity for further research. When
comparing both kinds of design activities, we noticed that designers without the
case library used the same solution for one of the issues described in the study
scenario, while the designers with the cases varied between two solutions. We
concluded that a CBR tool has much potential to aid the design activity, but
there are still issues that need to be addressed by further research.
[13_PhD_alvim]
Leandro Guimarães Marques ALVIM.
Estratégias de negociação de ativos financeiros utilizando agendamento por
intervalos ponderados. [Title in English: Weighted interval scheduling
resolution for building financial market trading strategies]. Ph.D. Thesis.
Port. Presentation: 14/03/13. 80 p. Advisor: Ruy Luiz Milidiú.
Abstract: Different types of investors make up the financial market and produce
market opportunities at different time scales. This indicates a heterogeneous
market structure. In this thesis, we conjecture that some time scales may offer
more predictive opportunities than others, which motivates the research and
construction of what we call optimal multiresolution strategies. For
multiresolution strategies, there are time series decomposition approaches for
operating at different resolutions, or proposals for dataset construction
according to optimal multiresolution trading decisions. The other approaches are
single-resolution. Thus, we address two problems: maximizing cumulative returns
and maximizing cumulative returns with risk control. Here, we propose solving
the Weighted Interval Scheduling problem to build multiresolution strategies.
Our methodology consists of dividing the market day into time intervals,
specializing traders by interval, and associating a prize with each trader. For
the cumulative return maximization problem, the prize corresponds to the
cumulative returns between days for the associated trader's operation interval.
For the cumulative return maximization problem with risk control, each trader's
prize corresponds to the cumulative return divided by the risk of the associated
operation interval. In order to control risk, we employ a set of traders per
interval and apply the Markowitz mean-variance method to find optimal weights
for the set of traders. Here, we conjecture that controlling the risk of each
interval leads to the overall risk control of the day. For signaling buy and
sell orders, our traders use opportunity detectors. These detectors correspond
to machine learning algorithms that process technical analysis indicators, price
and volume data. We conducted experiments on ten of the most liquid BM&FBovespa
stocks over a one-year span. Our Trading Team Composition strategy results
indicate an average 0.24% daily profit and a 77.24% annual profit, exceeding a
single-resolution strategy by 300% and 380%, respectively. Regarding operational
costs, the CTT strategy is viable from US$ 50,000. For the cumulative return
maximization problem under risk control, our Portfolio Composition by Intervals
strategy results indicate an average 0.179% daily profit and a 55.85% annual
profit, exceeding the Markowitz mean-variance method. Regarding operational
costs, the CCI strategy is viable from US$ 2,000,000. Our main contributions are
the Weighted Interval Scheduling approach for building multiresolution
strategies and a portfolio composition of traders instead of stocks.
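The combinatorial core named in the title is the classic weighted interval
scheduling problem, solvable exactly by dynamic programming; below is a compact
sketch (illustrative data only: intervals are trading windows in hours, weights
are trader prizes; this is not the thesis's code):
    import bisect
    def max_prize(intervals):
        # intervals: list of (start, end, prize); returns the best total prize
        # over pairwise-compatible intervals.
        intervals.sort(key=lambda t: t[1])          # sort by end time
        ends = [e for _, e, _ in intervals]
        best = [0] * (len(intervals) + 1)
        for i, (s, e, w) in enumerate(intervals):
            j = bisect.bisect_right(ends, s, 0, i)  # last interval ending <= s
            best[i + 1] = max(best[i], best[j] + w)
        return best[-1]
    print(max_prize([(10, 12, 5), (11, 14, 7), (13, 16, 4)]))  # 9: 10-12, 13-16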
[13_MSc_rodrigues]
Livia Couto Ruback RODRIGUES.
LDC Mediator: a mediator
for linked data cubes. [Title in Portuguese: Mediador LDC: um mediador de cubos
de dados interligados]. M.Sc. Diss. Eng. Presentation: 12/09/13. 69 p. Advisor:
Marco Antonio Casanova.
Abstract: A statistical data set comprises a collection of observations made
at some points across a logical space and is often organized as what is called a
data cube. The proper definition of the data cubes, especially of their
dimensions, helps to process the observations and, more importantly, helps to
combine observations from different data cubes. In this context, the Linked Data
Principles can be profitably applied to the definition of data cubes, in the
sense that the principles offer a strategy to provide the missing semantics of
the dimensions, including their values. This work introduces a mediation
architecture to help consume linked data cubes, exposed as RDF triples, but
stored in relational databases. The data cubes are described in a catalogue
using standardized vocabularies and are accessed by HTTP methods using REST
principles. Therefore, this work aims at taking advantage of both Linked Data
and REST principles in order to describe and consume linked data cubes in a
simple but efficient way.
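As an illustration of the REST-style consumption the mediator enables, the
sketch below issues a plain HTTP GET for a cube's observations; the endpoint
URL, resource layout and JSON response format are illustrative assumptions, not
the dissertation's actual interface.

    import json
    import urllib.request

    BASE = "http://example.org/ldc-mediator"    # hypothetical mediator address

    def get_observations(cube_id, **dimension_filters):
        """GET the observations of a cube, optionally filtered by dimension values."""
        query = "&".join(f"{d}={v}" for d, v in dimension_filters.items())
        url = f"{BASE}/cubes/{cube_id}/observations" + (f"?{query}" if query else "")
        with urllib.request.urlopen(url) as resp:   # a plain HTTP GET, per REST principles
            return json.load(resp)

    # e.g. the 2013 slice of a hypothetical unemployment cube:
    # rows = get_observations("unemployment", year=2013)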
[13_MSc_silva] **
Luiz Felipe de Souza e SILVA. Inspeção não intrusiva da
comunicação em aplicações baseadas em RPC. [Title in English: Non-Intrusive
communication inspection in RPC based applications]. M.Sc. Diss. Port.
Presentation: 08/04/13. 85 p. Advisor: Renato Fontoura de Gusmão Cerqueira.
Abstract: Software debugging is an activity that typically requires a huge
effort, primarily due to the need to analyze the multiple conditions that
determine the execution context of the software and to formulate hypotheses that
explain the problem, since only then can it be fixed. In the case of distributed
systems, parallelism, scheduling order, communication delays and equipment
failures are examples of factors that further increase the complexity of the
debugging activity. Therefore, the search for tools that assist in this process
is continuous. In this dissertation, we propose a tool for non-intrusive
monitoring and visualization of communications between components of distributed
systems, based on communication protocol analyzers and
monitoring of read and write system calls.
[13_MSc_bomfim]
Marcela Costa Câmara do BOMFIM.
Avaliação de técnicas de
visualização 2D-3D e de gestão de atenção para operação de plantas industriais.
[Title in English: Evaluation of 2D-3D visualization and attention management
techniques for the operation of industrial plants]. M.Sc. Diss. Port.
Presentation: 11/03/13. 108 p. Advisor: Simone Diniz Junqueira Barbosa.
Abstract: 2D and 3D visualizations are valid for different purposes. 2D
visualizations can decrease occlusion in specific parts, show undistorted angles
and distances, and enable precise positioning and navigation. 3D visualizations,
in turn, can provide an overview of the space, illustrate the shapes of objects,
offer precise notions of size and distance, and support free navigation,
allowing users to understand how a place looks in reality or to quickly
recognize a place they already know physically. Despite the advantages of the 3D
environment, it still presents some challenges, such as the occlusion of
objects, which can be dangerous in a real-time monitoring setting. The main
objective of this research is to provide an environment that supports the
real-time monitoring of industrial plants, exploring information and scientific
visualization techniques in an integrated environment that mixes 2D and 3D
visualizations, and determining how important information will be displayed to
call the user's attention through warnings about risky situations. Currently,
this monitoring is done on a 2D plant layout, and several factors motivated
migrating this type of visualization to the 3D environment, such as easing the
perception of the mapping between vision and action, improving communication
between teams that do not know the plant, and the intention of, in the future,
making the operation system an integrated part of other areas of expertise that
already use this 3D environment. Thus, this study's main research question is to
investigate ways of combining these techniques and to propose ways to handle
occlusion, the difficulties of navigating the 3D environment, and different ways
to draw the user's attention, considering both events within the user's field of
view and those outside it (due to the possibility of free navigation).
[13_MSc_bayser]
Maximilien Philipe Marie de BAYSER.
Flexible Composition
for C++11. [Title in Portuguese: Composição flexível em C++11]. M.Sc. Diss. Eng.
Presentation: 04/04/13. 107 p. Advisor: Renato Fontoura de Gusmão Cerqueira.
Abstract: Dependency injection, a form of inversion of control, is a way of
structuring the configuration and composition of software components that brings
many benefits such as a loose coupling of components. However, a generic
dependency injection framework requires runtime type introspection, and this is
why dependency injection is popular in Java and almost non-existent in C++. In
this work we present an introspection system for C++11 and show how to use it to
improve an implementation of the Service Component Architecture (SCA) for C++.
It uses several features of C++11, such as perfect forwarding, variadic
templates and rvalue references, to improve usability and minimize overhead.
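To illustrate why runtime type introspection matters for dependency injection,
here is a minimal, hypothetical container in a language where introspection is
built in (Python); the dissertation's contribution is providing equivalent
reflection facilities for C++11.

    import inspect

    class Container:
        def __init__(self):
            self._bindings = {}

        def bind(self, iface, impl):
            self._bindings[iface] = impl

        def resolve(self, iface):
            impl = self._bindings.get(iface, iface)
            sig = inspect.signature(impl.__init__)
            kwargs = {
                name: self.resolve(param.annotation)    # recurse on annotated dependencies
                for name, param in sig.parameters.items()
                if name != "self" and param.annotation is not inspect.Parameter.empty
            }
            return impl(**kwargs)

    class Logger:
        def info(self, msg):
            print("[info]", msg)

    class Service:
        def __init__(self, logger: Logger):     # dependency declared by type
            self.logger = logger

    c = Container()
    c.bind(Logger, Logger)
    svc = c.resolve(Service)                    # the Logger is injected automatically
    svc.logger.info("wired")

The container reads constructor parameter types at runtime and recursively
resolves them, exactly the kind of information a C++ program cannot obtain
without an introspection system such as the one presented.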
[13_MSc_ribeiro]
Paula Ceccon RIBEIRO.
Desenvolvimento e avaliação de um
jogo em dispositivos móveis para estimular a comunicação de crianças com
autismo. [Title in English: Development and evaluation of a mobile game to
encourage communication among children with autism]. M.Sc. Diss. Port.
Presentation: 01/08/13. 93 p. Advisor: Alberto Barbosa Raposo.
Abstract: About 50% of people diagnosed with autism have difficulties in
developing any kind of functional language. This dissertation presents the development
and evaluation of a multi-user collaborative game for mobile tangible
interfaces. The game was designed based on requirements of a group of children
with autism, in order to stimulate communication through collaborative
strategies. The game was designed for interaction by pairs of users. Each user
has a mobile tangible interface to share game resources and a TV as a shared
space. The game was evaluated with respect to research aspects related to the
users' interest in the technology, each user's perception of their interlocutor,
and the communication intentions observed between the users as they collaborated
with each other. Tests were carried out for 8 weeks with 4 children with autism.
The results indicate that both the environment provided by the technology and
the strategies of the game stimulated the users' communication through this
shared space.
[13_PhD_taranti]
Pier-Giovanni TARANTI.
Uma arquitetura para controle de
atrasos de tempo em simulações baseadas em sistemas multiagentes. [Title in
English: An architecture to tame time tardiness in multiagent based
simulations]. Ph.D. Thesis. Port. Presentation: 27/03/13. 153 p. Advisor: Carlos José
Pereira de Lucena.
Abstract: Virtual Environment Simulations (VES) are a special type of
simulation, often used to implement games and serious games with virtual space
representation, using either the next-event or the stepped-time simulation time
advance approach. An example of serious games is the simulation used to support
War Games. Multiagent-Based Simulations (MABS) are suitable for implementing
these simulations because of their ability to handle complexity and to model
individual actors. However, when agents are responsible for advancing their own
simulation time, a situation similar to a parallel simulation arises. This
implies treating issues such as delays in performing scheduled actions (i.e.
tardiness) and their consequences on the virtual space representation. This
situation is worse in Java-based MABS because of particularities of the Java
technology. This work presents an approach to tame this tardiness and help the
development of the cited VES using the agent-oriented paradigm.
[13_MSc_andre]
Rafael de Pinho ANDRÉ.
Avaliação do uso de análise estática
na detecção de conflitos semânticos em tipos de dados. [Title in English:
Evaluation of static analysis in data type semantic conflict detection]. M.Sc.
Diss. Port. Presentation: 04/04/13. 95 p. Advisor: Arndt Von Staa.
Abstract: Within information systems, faults can occur due to differences in
understanding among the parties involved regarding the meaning of data. This is
a well-known problem in software engineering, and defects of this type have been
responsible for catastrophic failures, such as the loss of the Mars Climate
Orbiter in 1999. The current scenario of data processing and exchange, with high
information traffic volume and heterogeneous participants, increases systems'
vulnerability to these defects. Moreover, software quality assurance techniques
are typically oriented to data structure and physical properties, failing to
efficiently address semantic issues. This work aims to evaluate the use of
static analysis to detect semantic conflicts in data types, investigating its
efficacy through a qualitative study comparing different software quality
assurance approaches. The static analysis tool VERITAS (VERIficador esTÁtico
Semântico) and the SemTypes notation were developed specifically to address the
problem of semantic conflicts, adding semantic control to the types recognized
by compilers, and are presented in this work.
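A minimal sketch of the idea behind semantic typing follows, using a
hypothetical SemType wrapper that flags unit mismatches; VERITAS performs this
kind of check statically, whereas the sketch below only illustrates the conflict
at runtime.

    class SemType:
        """A value tagged with a semantic unit; mixing units is a conflict."""
        def __init__(self, value, unit):
            self.value, self.unit = value, unit

        def __add__(self, other):
            if self.unit != other.unit:
                raise TypeError(f"semantic conflict: {self.unit} + {other.unit}")
            return SemType(self.value + other.value, self.unit)

    thrust_si = SemType(4.45, "newton-seconds")
    thrust_us = SemType(1.00, "pound-seconds")
    total = thrust_si + thrust_us   # raises: the Mars Climate Orbiter class of defect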
[13_MSc_martins]
Rafael Jessen Werneck de Almeida MARTINS.
Recomendação de pessoas em redes sociais com base em conexões entre usuários. [Title in
English: People recommendation in social networks based on user connections]. M.Sc. Diss. Eng. Presentation: 09/01/13.
59 p. Advisor:
Karin Koogan Breitman.
Abstract: Social networking websites have gained importance in recent years. In
them, users can connect with other users to interact. However, the number of
registered users is generally very large, so finding other users with common
interests is not easy. Recommender systems are software tools that generate
suggestions of various types of items to users, and they can be applied to
recommend people (other users) on social networks. Systems that recommend people
use specific techniques and, due to the social implications involved in personal
relationships, must take several factors into consideration. The scarcity of
available data makes the task of generating good recommendations more difficult.
This dissertation discusses the theme and presents a people recommendation
system for social networking websites based on user connections. To test the
system, we conducted an experiment with Peladeiro, a real social networking
website with over 500,000 users, where little reliable data is available.
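A minimal sketch of connection-based people recommendation follows, assuming a
plain friends-of-friends heuristic; the system described in the dissertation may
weigh candidates differently.

    from collections import Counter

    def recommend(user, connections, top_n=3):
        """Rank non-connected users by how many connections they share with `user`."""
        direct = connections[user]
        counts = Counter(
            fof
            for friend in direct
            for fof in connections[friend]
            if fof != user and fof not in direct
        )
        return [candidate for candidate, _ in counts.most_common(top_n)]

    graph = {
        "ana": {"bia", "carlos"},
        "bia": {"ana", "davi"},
        "carlos": {"ana", "davi", "edu"},
        "davi": {"bia", "carlos"},
        "edu": {"carlos"},
    }
    print(recommend("ana", graph))   # ['davi', 'edu']: davi shares two connections with ana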
[13_MSc_vasconcelos]
Rafael Oliveira VASCONCELOS.
A dynamic load balancing
mechanism for data stream processing on DDS systems. [Title in Portuguese: Um
mecanismo de balanceamento de carga dinâmico para processamento de fluxo de
dados em sistemas DDS]. M.Sc. Diss. Eng. Presentation: 09/01/13. 75 p. Advisor:
Markus Endler.
Abstract: This thesis presents the Data Processing Slice Load Balancing
solution to enable dynamic load balancing of Data Stream Processing on DDS-based
systems (Data Distribution Service). A large number of applications require
continuous and timely processing of high volumes of data originating from many
distributed sources, such as network monitoring, traffic engineering systems,
intelligent routing of cars in metropolitan areas, sensor networks,
telecommunication systems, financial applications and meteorology. The key
concept of the proposed solution is the Data Processing Slice (DPS), which is
the basic unit of data processing load of server nodes in a DDS Domain. The Data
Processing Slice Load Balancing solution consists of a load balancer, which is
responsible for monitoring the current load of a set of homogeneous data
processing nodes; when a load imbalance is detected, it coordinates the actions
to redistribute some data processing slices among the processing nodes in a safe
way. Experiments with large data streams have demonstrated the low overhead,
good performance and reliability of the proposed solution.
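The sketch below illustrates the rebalancing idea with hypothetical per-node
slice counts: detect an imbalance and hand slices from the most to the least
loaded node. The actual solution coordinates such handoffs among the nodes over
DDS.

    def rebalance(slices_per_node, threshold=1):
        """Move slices from the most to the least loaded node until balanced."""
        moves = []
        while True:
            hi = max(slices_per_node, key=slices_per_node.get)
            lo = min(slices_per_node, key=slices_per_node.get)
            if slices_per_node[hi] - slices_per_node[lo] <= threshold:
                return moves                    # imbalance within tolerance
            slices_per_node[hi] -= 1            # hand one slice over
            slices_per_node[lo] += 1
            moves.append((hi, lo))

    loads = {"node-a": 9, "node-b": 3, "node-c": 4}
    print(rebalance(loads))   # [('node-a', 'node-b'), ('node-a', 'node-b'), ('node-a', 'node-c')]
    print(loads)              # {'node-a': 6, 'node-b': 5, 'node-c': 5}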
[13_MSc_marques]
Ricardo Cavalcanti MARQUES.
Algoritmo robusto para interseção de superfícies triangulares. [Title in English: Robust
algorithm for triangulated surfaces intersection]. M.Sc. Diss. Port. Presentation:
06/09/13. 117 p. Advisor: Hélio Cortes Vieira Lopes.
Abstract: The goal of this work is to design and develop an efficient, reliable
and accurate algorithm for the intersection of triangulated surfaces that
represent complex geological models. The wide range of these models'
coordinates, in contrast with the relatively small average size of their
elements, leads to numerical instability problems, which may generate bad models
or crash the geometric modeler. Additionally, a high degree of precision is
desired in the model to avoid accidents in the field of oil exploration. This
work proposes a solution that reduces the numerical issues through some
geometrical strategies and the use of exact arithmetic. Examples are used to
demonstrate these robustness problems and to validate the proposed algorithm.
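A minimal sketch of the exact-arithmetic idea follows, using Python's Fraction
as the exact number type: the 2D orientation predicate below cannot suffer the
floating-point sign errors that destabilize intersection tests. The geometric
strategies and 3D predicates of the dissertation are more involved.

    from fractions import Fraction

    def orient2d(a, b, c):
        """Sign of twice the area of triangle abc: >0 ccw, <0 cw, 0 collinear (exactly)."""
        ax, ay = map(Fraction, a)
        bx, by = map(Fraction, b)
        cx, cy = map(Fraction, c)
        det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        return (det > 0) - (det < 0)

    # A near-degenerate input where float evaluation can report a wrong sign:
    print(orient2d(("0.1", "0.1"), ("0.3", "0.3"), ("0.5", "0.5")))   # 0: exactly collinear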
[13_MSc_prado]
Renato Deris PRADO.
Visualização de rótulo em objetos
de modelos massivos em tempo real. [Title in English: Real-time label
visualization in massive models objects]. M.Sc. Diss. Port. Presentation:
05/04/13. 57 p. Advisor: Alberto Barbosa Raposo.
Abstract: Virtual Labels are used in computer graphics applications to
represent textual information arranged on geometric surfaces. Such information
consists of names, numbering, or other relevant data that need to be noticed
quickly when a user scans the objects in the scene. This work focuses on
so-called massive models, such as CAD (Computer Aided Design) models of oil
refineries, which have a large number of geometric primitives whose rendering
carries a high computational cost. In large engineering projects, the immediate
visualization of information specific to each object or part of the model is
desirable but, if displayed through conventional texturing techniques, it can
exceed the available computational resources. In this work we have developed a
way to view, in real time, virtual labels with different information on the
surfaces of objects in massive models. The technique is implemented entirely on
the GPU, shows no significant loss of performance, and has a low memory cost.
CAD model objects are the main focus of the work, although the solution can be
used with other types of objects once their texture coordinates are adjusted
correctly.
[13_MSc_ortiga]
Sergio Ricardo Batuli Maynoldi ORTIGA.
DCD Tool: um conjunto de ferramentas para descoberta e triplificação de cubos de
dados estatísticos. [Title in English: DCD Tool: a set of tools for the
discovery and triplification of statistical data cubes]. M.Sc. Diss. Eng. Presentation: 06/09/13. 225 p. Advisor:
Marco Antonio Casanova.
Abstract: The production of social indicators and their availability on the Web
is an important initiative for the democratization and transparency that
governments have been promoting in the last two decades. In Brazil, several
government or government-linked institutions publish relevant indicators to help
assess government performance in the areas of health, education, environment and
others. The access, query and correlation of these data demand substantial
effort, especially in a scenario involving different organizations. Thus, the
development of tools focused on the integration and availability of the
information stored in such databases becomes a significant effort. Another
aspect that requires attention, in the case of Brazil, is the difficulty in
identifying statistical databases among the other types of data that share the
same database. This dissertation proposes a software framework that covers the
identification of statistical data in the source database and the enrichment of
their metadata using W3C standardized ontologies, as a basis for the
triplification process.
[13_MSc_abreuesilva]
Sofia Ribeiro Manso de ABREU e SILVA.
Catalogue of linked
data cube descriptions. [Title in Portuguese: Catálogo de descrições de cubos de
dados interligados]. M.Sc. Diss. Eng. Presentation: 28/06/13. 85 p. Advisor:
Marco Antonio Casanova.
Abstract: Statistical Data are considered one of the major sources of
information and are essential in many fields as they can work as social and
economic indicators. A statistical data set comprises a collection of
observations made at some points of a logical space and is often organized as
what is called a data cube. The proper definition of the data cubes, especially
of their dimensions, helps processing the observations and, more importantly,
helps combining observations from different data cubes. In this context, the
Linked Data principles can be profitably applied to the definition of data
cubes, in the sense that the principles offer a strategy to provide the missing
semantics of the dimensions, including their values. This dissertation first
describes a mediation architecture to help describing and consuming statistical
data, exposed as RDF triples, but stored in relational databases. One of the
features of this architecture is the Catalogue of Linked Data Cube Descriptions,
which is described in detail in the dissertation. This catalogue has a
standardized description in RDF of each data cube actually stored in statistical
(relational) databases. Therefore, the main discussion in this dissertation is
how to represent the data cubes in RDF, i.e., how to map the database concepts
to RDF in a way that makes it easy to query, analyze and reuse statistical data
in the RDF format.
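A minimal sketch of such a mapping follows, describing one observation with the
W3C Data Cube (qb:) vocabulary; the cube, dimension and measure names are
hypothetical, and rdflib is used only for illustration.

    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import XSD

    QB = Namespace("http://purl.org/linked-data/cube#")
    EX = Namespace("http://example.org/stats/")

    g = Graph()
    g.bind("qb", QB)

    g.add((EX.unemploymentCube, RDF.type, QB.DataSet))
    obs = EX.obs1
    g.add((obs, RDF.type, QB.Observation))
    g.add((obs, QB.dataSet, EX.unemploymentCube))
    g.add((obs, EX.refYear, Literal(2013, datatype=XSD.gYear)))   # a dimension value
    g.add((obs, EX.rate, Literal(5.4, datatype=XSD.decimal)))     # a measure value

    print(g.serialize(format="turtle"))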
[13_MSc_sousa]
Taissa Abdalla Filgueiras de SOUSA.
Sistema de recomendação
para apoiar a construção de gráficos com dados estatísticos. [Title in English:
Recommender system to support chart constructions with statistical data]. M.Sc.
Diss. Port. Presentation: 22/03/13. 180 p. Advisor: Simone Diniz Junqueira
Barbosa.
Abstract: Research on statistical data visualization emphasizes the need for
systems that assist in decision-making and visual analysis. Having found
problems in chart construction by novice users, we decided to research the
following question: how can we support novice users in creating efficient
visualizations with statistical data? Thus, we created ViSC, a recommender
system that supports the interactive construction of charts to visualize
statistical data by offering a series of recommendations based on the selected
data and the user interaction with the tool. The system explores a visualization
ontology to offer a set of graphs that help to answer information-based
questions related to the current graph data. By traversing the recommended
graphs through their related questions, the user implicitly acquires knowledge
both on the domain and on visualization resources that better represent the
domain concepts of interest. This dissertation presents the problems that
motivated the research, describes the ViSC tool and presents the results of a
qualitative study conducted to evaluate ViSC. We used two methods in our
evaluation: the Semiotic Inspection Method (SIM) and the Retrospective
Communicability Evaluation (RCE) — a combination of the Communicability
Evaluation Method (CEM) and Retrospective Think Aloud Protocol. We first analyze
how the questions influence the users’ traversal through the graph and, then, we
address the broader question.
[13_MSc_pinto]
Thiago Delgado PINTO.
Uma ferramenta para geração e execução
automática de testes funcionais baseados na descrição textual de casos de uso. [Title
in English: A tool for the automatic generation and execution of functional
tests based on the textual use case description]. M.Sc. Diss. Port. Presentation:
05/04/13. 190 p. Advisor: Arndt Von Staa.
Abstract: This master's dissertation presents a solution for the automatic
generation and execution of functional tests based on the textual use case
description, and aims to verify whether a given application matches its
functional requirements defined by this documentation. The constructed tool is
capable of generating valued semantic test cases, transforming them into source
code (for Java Swing and the TestNG and FEST frameworks, in the current
version), executing them, collecting the results, and analyzing whether the
application's use cases match (or not) its requirements. The solution's main
differentials include the coverage of test scenarios that involve more than one
use case, the coverage of scenarios containing recursive flows, the possibility
of defining business rules using data existing in test databases, the automatic
generation of test values, and the generation of semantic functional tests in a
format independent of programming languages and frameworks.
[13_PhD_salmito]
Tiago Lima SALMITO.
Uma
abordagem flexível para o modelo de concorrência em estágios. [Title
in English: A flexible approach to staged events]. Ph.D. Thesis. Port. Presentation:
02/09/13. 107 p. Advisor: Noemi de La Rocque Rodriguez.
Abstract: The purpose of this work is to explore the flexibility provided by the
staged event-driven concurrency model to integrate both cooperative event loops
and preemptive threads in a single high-level abstraction. The contributions of
this work focus on an extension of the staged model that decouples the
specification of concurrent applications from the decisions related to the
execution environment, allowing them to be flexibly mapped to different
configurations according to the task scheduling needs and processing granularity
of specific parts of the application. In order to provide an adequate definition
of the concept of hybrid concurrency models, we propose a classification system
based on the combination of basic features of concurrent systems and the
possibility of parallel execution on multiple processors. Based on this
classification, we analyze the benefits and drawbacks associated with each
concurrency model, justifying the adoption of models that combine threads and
events in the same programming environment and the extension of the staged
model. Finally, we present the implementation of the proposed model in the Lua
programming language and its use in execution scenarios that confirm the
benefits of the extended staged model in the specification of concurrent
applications.
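A minimal sketch of the staged idea follows, with hypothetical stages in Python:
each stage owns an event queue and is mapped to its own pool of preemptive
threads, so the stage graph is specified independently of the execution
configuration, which is the decoupling the extension is about.

    import queue
    import threading
    import time

    class Stage:
        def __init__(self, handler, workers=1):
            self.inbox = queue.Queue()
            self.handler = handler
            for _ in range(workers):            # mapping decision: size of the thread pool
                threading.Thread(target=self._loop, daemon=True).start()

        def _loop(self):
            while True:
                event, next_stage = self.inbox.get()
                out = self.handler(event)
                if next_stage is not None:
                    next_stage.inbox.put((out, None))   # hand the event to the next stage

    parse = Stage(lambda e: e.upper(), workers=4)   # CPU-bound stage: larger pool
    log = Stage(lambda e: print("handled:", e))     # serial stage: single thread

    parse.inbox.put(("request", log))               # an event flows parse -> log
    time.sleep(0.1)                                 # demo only: let the stages drain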
[13_MSc_nascimento]
Vagner Barbosa do NASCIMENTO.
Modelagem e geração de
interfaces dirigidas por regras. [Title in English: Rule-based approach to
modeling and generating user interfaces]. M.Sc. Diss. Port. Presentation:
15/04/13. 150 p. Advisor: Daniel Schwabe.
Abstract: Today there are countless applications developed for the World Wide
Web. These applications have user interfaces that should be able to adapt to
several usage scenarios, content and context changes, and also to be compatible
with multiple browsers and devices. Furthermore, the design and maintenance of
interfaces that need adjustments depending on the business rules of the
application require much effort during the development life cycle of an
application. In order to assist in the design of these interfaces, some UIDLs
(User Interface Description Languages) have been proposed, aiming at providing a
level of abstraction so that the designer does not need to immediately focus
attention on concrete aspects during the development of an interface. This work
presents a proposal for modeling and generating interfaces of web applications
based on production rules. These rules define criteria for the situations that
determine the activation of an interface, for the selection of the elements that
participate in the abstract composition, and also for the mapping of the
specific widgets that will be used in the rendering stage. The proposal
contemplates a method for modeling interfaces, an implementation architecture,
and a framework for authoring and executing the proposed interface models. An
architecture is also presented for building widgets, as well as a machine that
interprets and renders concrete interfaces from a hierarchy specification. The
overall goal of the proposal is to design interfaces that are more responsive to
data and contexts of use, including situations of adaptation, generating more
flexible and reusable interfaces.
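A minimal sketch of production rules driving widget selection follows; the rule
format and widget names are hypothetical, and the rule language proposed in the
dissertation is considerably richer.

    RULES = [
        {"when": lambda f: f["type"] == "date", "widget": "calendar"},
        {"when": lambda f: f["type"] == "enum" and len(f["options"]) <= 4,
         "widget": "radio-group"},
        {"when": lambda f: f["type"] == "enum", "widget": "dropdown"},
        {"when": lambda f: True, "widget": "text-input"},   # default rule
    ]

    def select_widget(field):
        """Return the widget of the first rule whose condition matches the field."""
        return next(r["widget"] for r in RULES if r["when"](field))

    print(select_widget({"type": "enum", "options": ["yes", "no"]}))   # radio-group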
[13_MSc_almeida]
Vitor Pinheiro de ALMEIDA.
Patient-Buddy-Build: customized mobile monitoring for patients with chronic
diseases. [Title in Portuguese: Patient-Buddy-Build: acompanhamento remoto
móvel customizável de pacientes com doenças crônicas]. M.Sc. Diss. Eng. Presentation:
02/10/13. 201 p. Advisor: Edward Hermann Haeusler.
Abstract: This dissertation consists of the development of a tool for generating
mobile applications that enables a customized form of remote monitoring of
patients with chronic diseases. This customization is based on parameters and
formal descriptions of patient preferences, the type of chronic disease, the
monitoring procedure required by the doctor, the prescribed medication, and
information about the context (i.e. environment) of the patient, where the
latter is obtained from sensors. Based on these data, the system determines
which information is most relevant to acquire from the patient through
questionnaires and sensors embedded in or connected to a smartphone. Relevant
information is the information that best helps to identify possible changes in
the monitoring process of a patient. This set of information is sent from the
mobile application to the responsible physician. The medical treatment and the
kind of chronic disease define the set of information to be collected. It should
be stressed that the goal is not to support automatic diagnosis, but only to
provide means for physicians to obtain updated information about their patients,
so as to support their remote monitoring.
[13_MSc_roncalla]
Wilfredo Bardales RONCALLA.
On the lower bound for the
maximum consecutive subsums problem. [Title in Portuguese: Sobre o limite
inferior para problemas das sub-somas consecutivas máximas]. M.Sc. Diss. Eng.
Presentation: 29/08/13. 60 p. Advisor: Eduardo Sany Laber.
Abstract: The Maximum Consecutive Subsums Problem (MCSP) arises in scenarios of
both theoretical and practical interest, such as approximate pattern matching,
protein identification and the analysis of statistical data, to cite a few.
Given a sequence of n non-negative real numbers, the MCSP asks for the
computation of the maximum consecutive sum of each length from 1 through n.
Since trivial implementations allow computing the maximum for a fixed length in
linear time, it follows that there exists a naive quadratic procedure to solve
the MCSP.
Notwithstanding the effort devoted to the problem, no algorithm is known which
is significantly better than the naive solution. Therefore, a natural question
is whether there exists a superlinear lower bound for the MCSP. In this work we
report our research in the direction of proving such a lower bound.
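For concreteness, here is a minimal sketch of the naive quadratic procedure the
abstract refers to: for each length, slide a window of that length across the
sequence and keep the maximum sum, at O(n) cost per length and O(n^2) overall.

    def mcsp_naive(a):
        """Return best, where best[L-1] is the maximum sum over windows of length L."""
        n = len(a)
        best = []
        for length in range(1, n + 1):
            window = sum(a[:length])            # first window of this length
            m = window
            for i in range(1, n - length + 1):
                window += a[i + length - 1] - a[i - 1]   # slide right by one position
                m = max(m, window)
            best.append(m)
        return best

    print(mcsp_naive([2, 0, 3, 1]))   # [3, 4, 5, 6]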
[13_MSc_caberatapia]
Ximena Alexandra CABERA TAPIA.
EnLiDa: enriquecimento
das descrições de linked data cubes. [Title in English: EnLiDa:
Enrichment of linked data cube descriptions]. M.Sc. Diss. Port. Presentation:
30/08/13. 83 p. Advisor: Marco Antonio Casanova.
Abstract: The term Linked Data refers to a set of RDF triples organized
according to certain principles that facilitate the publishing and consumption
of data using the Web infrastructure. The importance of the Linked Data
principles stems from the fact that they offer a way to minimize the
interoperability problem between databases exposed on the Web. This dissertation
proposes to enrich a database that contains Linked Data cube descriptions by
interconnecting the components of the data cubes with entities defined in
external data sources, using owl:sameAs triples. The dissertation proposes an
architecture consisting of two major components, the automatic enriching
component and the manual enriching component. The first component automatically
generates owl:sameAs triples, while the second component helps the user manually
define owl:sameAs triples that the automatic component was not able to uncover.
Together, these components therefore facilitate the definition of data cubes
according to the Linked Data principles.
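A minimal sketch of the automatic enrichment step follows, assuming a naive
label-matching heuristic and a hypothetical external source; real interlinking
relies on more robust similarity measures.

    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL

    local = {URIRef("http://example.org/dim/rio"): "Rio de Janeiro"}
    external = {URIRef("http://example.org/geo/rio-de-janeiro"): "Rio de Janeiro"}

    g = Graph()
    for l_uri, l_label in local.items():
        for e_uri, e_label in external.items():
            if l_label.casefold() == e_label.casefold():   # hypothetical matching rule
                g.add((l_uri, OWL.sameAs, e_uri))          # the owl:sameAs link

    print(g.serialize(format="turtle"))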