Theses and Dissertations
2016
ABSTRACTS
Departamento de Informática
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
This file contains the list of the M.Sc. Dissertations and Ph.D. Theses presented to the Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio, Brazil, in 2016. They are all available in print format and, according to the authors' preference, some of them are freely available for download, while others are available for download exclusively to the PUC-Rio community(*).
For any requests, questions, or suggestions, please contact:
Rosane Castilho
bib-di@inf.puc-rio.br
Last update: 21/JULY/2017
[Under construction; sometimes, digital versions may not be available yet]
[16_MSc_dominguez]
Alain DOMINGUEZ FUENTES.
Sintonia fina automática com índices parciais. [Title in English: Database self-tuning with partial indexes].
M.Sc. Diss. Port. Presentation: 29/03/2016. 82 p. Advisor: Sergio Lifschitz.
Abstract:
Partial indexes are access structures at the physical level of databases: indexes that cover a subset of the tuples in a table, defined through a conditional expression. This dissertation studies the identification and subsequent automatic creation of partial indexes that can contribute to improving the performance of a database system. We propose an algorithm that examines, for each relevant query, the set of indexable attributes for which the creation of a partial index could influence the query optimizer to generate more efficient plans. We perform data mining on indexable attribute patterns to obtain correlated attributes according to their frequency in queries within the particular workload. We obtain a proposal for a set of candidate partial indexes, also considering a benefit heuristic. We may consider a self-tuning analysis of
an index configuration with both complete and partial indexes. We have
implemented techniques and algorithms proposed in this research into DBX, a
framework that allows local and global self-tuning regarding relational
databases.
[16_MSc_diaz]
Alejandro DIAZ CENTENO.
Evaluation of physical-motor status of people with reduced mobility using motion
capture with Microsoft Kinect. [Title in Portuguese: Avaliação do estado
físico-motor de pessoas com mobilidade reduzida usando captura de movimento com
o Microsoft Kinect].
M.Sc. Diss. Eng. Presentation: 30/09/2016. 65 p. Advisor: Alberto Barbosa
Raposo.
Abstract:
The evaluation of the motor status of stroke patients and elderly people is done
by using qualitative scales without standardization or measuring instruments.
The scales are most common because they are relatively inexpensive and
accessible, but suffer the disadvantage of being subjective, variable, and
require prolonged training time. Moreover, assessment instruments, although more
accurate and objective, have the problem of being heterogeneous, usually very
expensive, and focused on specific goals. The recent rise of 3D sensors with high accuracy and low cost, some of them well known, such as the Microsoft Kinect, allows the use of motion analysis to quantify the deficit or success of
a physiotherapeutic or drug treatment in a quantitative and standardized way,
enabling the automatic comparison with standards of healthy people, and people
with the same stage of disease, or similar characteristics. The aim of this work
is to create a system using Microsoft Kinect for capturing and processing motor
status of patients with reduced mobility in a non-invasive way, providing
clinical feedback that allows the conduction of a quantitative and objective
evaluation of patients, enabling monitoring of disease progression and reduced
rehabilitation time.
[16_MSc_lucas]
Ana Paula Lima LUCAS. Gestão da
manutenção de software: um estudo de caso. [Title in English: Software
maintenance management: a case study].
M.Sc. Diss. Port. Presentation: 27/09/2016. 104 p. Advisor: Arndt von Staa.
Abstract:
The company participating in this work sought to implement software maintenance activities in response to a high incidence of defects, constant rework, and other problems. To address these problems, a preliminary study of the system in question was conducted, evaluating the current state of its software maintenance. In view of the diagnosis, the need for changes in the way the system's maintenance activities were conducted became evident. With this, the search for improvements began, with the objective of reducing the occurrence of defects and also increasing the maintainability of the system. Knowing the problems and what could be done to improve them, it was proposed to adopt some practices of the Software Maintenance Maturity Model (SMmm) and to integrate the concepts of these practices into a process defined and adapted to the needs of the system. To support this implementation, the infrastructure used was the Team Foundation Service (TFS) platform, which supported the implementation of the practices selected according to the requirements of the SMmm model, resulting in a defined process, supported by TFS, that partially implements the SMmm model. This dissertation presents a case study with the objective of evaluating the benefits provided by the use of some practices of the SMmm model. The evaluation compared data from the preliminary study with data collected after the adoption of the practices; the results pointed to a significant reduction in the number of issues.
[16_MSc_macdowell]
André Victor Gomes de Aboim MAC DOWELL.
Uma API para exergames móveis com eventos centrados em microlocalização baseada em BLE fingerprinting. [Title in English: An API for mobile exergame micro-location events using BLE fingerprinting].
M.Sc. Diss. Port. Presentation: 04/06/2016. 106 p. Advisor: Markus Endler.
Abstract:
Smartphones are ever more present in the day-to-day life of our society, for both work and entertainment. On this mobile platform, there is a growing number of games that use the smartphone's sensing capabilities in their gameplay mechanics, such as GPS in location-based games, a category of mobile pervasive games. However, there are categories of pervasive games that require specific hardware capabilities not normally found in a smartphone, such as precise proximity inference between devices and a location solution that is more precise, fast, and reliable than GPS. Simultaneously, both sensing and beacon technologies for the Internet of Things (IoT) are getting cheaper and more available, and there are many micro-location solutions that use these technologies in different application contexts. In mobile exergames, a category of pervasive games where the gameplay is outdoors and fast-paced, with constant interaction between multiple players, a location solution more precise than GPS is necessary. The development of these games includes the execution of game sessions and their components, together with the interoperability of different technologies. Thus, in this work, we present a location strategy using fingerprinting and Bluetooth LE (BLE) beacons, and an API for common location requests and events. We analyze the location strategy through tests with different configurations using a pervasive game middleware with session management, and evaluate the location API through gameplay abstractions for a few pervasive games.
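The BLE fingerprinting strategy mentioned above is, in general terms, a nearest-neighbor match between live beacon signal strengths and a previously calibrated database. The sketch below illustrates that general technique only; the zone names, RSSI values, and function names are illustrative assumptions, not taken from the dissertation or its API.

```python
# Hedged sketch of nearest-neighbor BLE fingerprinting: match a live RSSI
# reading against reference points recorded during a calibration phase.

def euclidean(a, b):
    """Euclidean distance between two RSSI vectors (one entry per beacon)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def locate(reading, fingerprints):
    """Return the reference zone whose stored beacon RSSI vector is
    closest to the live reading."""
    return min(fingerprints, key=lambda zone: euclidean(reading, fingerprints[zone]))

# RSSI (dBm) from three BLE beacons, recorded at calibration time:
db = {
    "entrance": [-45, -70, -80],
    "hall":     [-70, -50, -75],
    "court":    [-85, -72, -48],
}
zone = locate([-68, -52, -77], db)   # live reading closest to "hall"
```

In a real deployment, averaging several readings per zone and per beacon helps smooth out RSSI noise before matching.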
[16_MSc_moreira]
Andrey D'Almeida Rocha RODRIGUES.
Visualização de modelos digitais de elevação em multiresolução utilizando programação em GPU. [Title in English: Multi-resolution visualization of digital elevation models using GPU shaders].
M.Sc. Diss. Port. Presentation: 07/04/2016. 37 p. Advisor: Waldemar Celes Filho.
Abstract:
3D CAD models have played an important role in the management of engineering projects. Many of these files contain several objects with an implicit representation that end up being represented as triangular meshes. Although suitable for rendering, the triangular mesh representation brings some drawbacks, such as ambiguity in objects with a low discretization rate. Reverse engineering aims to reconstruct this discrete representation into its original continuous representation. In this work, we propose a novel methodology for geometry reconstruction in CAD models using Support Vector Machines and shape descriptors.
[16_MSc_castilhoneto]
Arthur Beltrão CASTILHO NETO.
Anotador de papéis semânticos para português. [Title in English: Semantic role labeling for Portuguese].
M.Sc. Diss. Port. Presentation: 16/12/2016. 78 p. Advisor: Ruy Luiz Milidiú.
Abstract: Semantic role-labeling (SRL) is an important task of natural
language processing (NLP) which allows establishing meaningful relationships
between events described in a given sentence and its participants. Therefore, it
can potentially improve performance on a large number of NLP systems such as
automatic translation, spell correction, information extraction and retrieval
and question answering, as it decreases ambiguity in the input text. The vast majority of SRL systems reported so far employ supervised learning techniques to perform the task. For better results, large, manually reviewed corpora are used. The Brazilian semantic-role-labeled lexical resource (PropBank.Br) is much smaller. Hence, in recent years, attempts have been made to improve performance using semi-supervised and unsupervised learning. Even while making several direct and indirect contributions to NLP, those studies were not able to outperform exclusively supervised systems. This dissertation presents an approach to the SRL task for the Portuguese language using supervised learning over a set of 114 categorical features. Over those, we apply a combination of two domain regularization methods to cut the number of binary features by 96%. We test an SVM model (L2-loss dual support vector classification) on the PropBank.Br dataset, achieving results slightly better than the state of the art. We empirically evaluate the system using the official CoNLL 2005 Shared Task script, obtaining 82.17% precision, 82.88% coverage and 82.52% F1. The previous state-of-the-art Portuguese SRL system scores 83.0% precision, 81.7% coverage and 82.3% F1.
[16_MSc_pontes]
Bruno Silva PONTES.
Reconhecimento de
posturas humanas preservando a privacidade: um estudo de caso usando um sensor
térmico de baixa resolução. [Title in English:
Human posture recognition preserving privacy: a case study using a low
resolution array thermal sensor].
M.Sc. Diss. Port. Presentation: 30/09/2016. 78 p. Advisor: Hugo Fuks.
Abstract: Posture recognition is one of the challenges of human sensing, helping ambient assisted living environments keep track of people. These environments assist doctors in diagnosing their patients' health, mainly through real-time recognition of activities of daily living, which is seen in the medical field as one of the best ways to anticipate critical health situations. In addition, the aging of the world's population, the lack of hospital resources to serve everyone, and increased health care costs drive the development of systems to support ambient assisted living. Preserving privacy in these sensor-monitored environments is a critical factor for user acceptance, so there is a demand for solutions that do not require images. This work demonstrates the use of a low-resolution thermal array sensor in human sensing, showing that it is feasible to detect presence and recognize human postures using only the data from this sensor.
[16_PhD_mendes]
Carlos Augusto Teixeira MENDES.
GeMA, um novo framework para a prototipação, desenvolvimento e integração de
simulações multifísicas e multiescalas em grupos multidisciplinares. [Title in English:
GeMA, a new
framework for prototyping, development and integration of multiphysics and
multiscale simulations in multidisciplinary groups].
Ph.D. Thesis. Port. Presentation: 01/04/2016. 168 p. Advisor: Marcelo Gattass.
Abstract:
Petroleum exploration and production is a complex task where the use of physical
models is imperative to minimize exploration risks and maximize the return on
the invested capital during the production phase of new oil fields. Over time,
these models have become more and more complex, giving rise to a tendency of
integration between several simulators and the need for new multiphysics
simulations, where single-physics models are solved together in a coupled way.
This work presents the GeMA (Geo Modelling Analysis) framework, a library to
support the development of new multiphysics simulators, allowing both the
coupling of new models built with the framework as a base and the integration
with pre-existing simulators. Its objective is to promote the use of software
engineering techniques, such as extensibility, reusability, modularity and
portability in the construction of engineering physical models, allowing
engineers to focus on the physical problem formulation since the framework takes
care of data management and other necessary support functions, speeding up code
development. Built to aid during the entire multiphysics simulation workflow,
the framework architecture supports multiple simulation and coupling paradigms,
with special emphasis given to finite element methods. Being capable of
representing the spatial domain by multiple discretizations (meshes) and
exchanging values between them, the framework also implements some important
concepts of extensibility, through the combined use of plugins and abstract
interfaces, configurable orchestration and fast prototyping through the use of
the Lua language. This work also presents a set of test cases used to assess the framework's correctness and expressiveness, with particular emphasis on a 2D basin model that couples non-linear finite element temperature calculations, mechanical compaction, and hydrocarbon maturation and generation.
[16_MSc_marques]
Daniel dos Santos MARQUES.
A decision tree learner for cost-sensitive binary classification. [Title in Portuguese: Uma árvore de decisão para classificação binária sensível ao custo].
M.Sc. Diss. Eng. Presentation: 22/09/2016. 46 p. Advisor: Eduardo Sany Laber.
Abstract:
Classification problems have been widely studied in the machine learning literature, generating applications in several areas. However, in a number of scenarios, misclassification costs can vary substantially, which motivates the study of cost-sensitive learning techniques. In the present work, we discuss the use of decision trees on the more general Example-Dependent Cost-Sensitive Problem (EDCSP), where misclassification costs vary with each example. One of the main advantages of decision trees is that they are easy to interpret, which is a highly desirable property in a number of applications. We propose a new attribute selection method for constructing decision trees for the EDCSP and discuss how it can be efficiently implemented. Finally, we compare our new method with two other decision tree algorithms recently proposed in the literature, on 3 publicly available datasets.
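The abstract does not spell out its attribute selection method, but the general idea of example-dependent cost-sensitive splitting can be illustrated with a minimal sketch: each example i carries its own false-positive and false-negative costs, and a split is scored by the total misclassification cost of labeling each side with its cheaper class. The function names and the cost model below are illustrative assumptions, not the dissertation's actual criterion.

```python
# Illustrative sketch of an example-dependent cost-sensitive split criterion:
# pick the numeric threshold that minimizes total misclassification cost.

def leaf_cost(ys, cost_fp, cost_fn):
    """Cost of labeling an entire leaf 0 or 1; return the cheaper option.
    Predicting 1 incurs cost_fp[i] on each true negative; predicting 0
    incurs cost_fn[i] on each true positive."""
    cost_pred1 = sum(c for y, c in zip(ys, cost_fp) if y == 0)
    cost_pred0 = sum(c for y, c in zip(ys, cost_fn) if y == 1)
    return min(cost_pred0, cost_pred1)

def split_cost(xs, ys, cost_fp, cost_fn, threshold):
    """Total cost of the binary split x <= threshold on one attribute."""
    left = [i for i, x in enumerate(xs) if x <= threshold]
    right = [i for i, x in enumerate(xs) if x > threshold]
    pick = lambda idx, seq: [seq[i] for i in idx]
    return (leaf_cost(pick(left, ys), pick(left, cost_fp), pick(left, cost_fn))
            + leaf_cost(pick(right, ys), pick(right, cost_fp), pick(right, cost_fn)))

def best_threshold(xs, ys, cost_fp, cost_fn):
    """Candidate thresholds are the observed attribute values."""
    return min(set(xs), key=lambda t: split_cost(xs, ys, cost_fp, cost_fn, t))
```

Replacing the usual class-frequency impurity with this cost sum is what makes the tree sensitive to per-example costs rather than to error counts alone.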
[16_PhD_baia]
Davy de Medeiros BAÍA.
Modelagem de contextos dinâmicos em simulação de gestão de projetos de software baseada em multiagentes. [Title in English: Dynamic context modelling in multi-agent-based software project management simulation].
Ph.D. Thesis. Port. Presentation: 08/03/2016. 201 p. Advisor: Carlos José Pereira de Lucena.
Abstract:
Software project management is not a trivial task, especially with the changes that occur during the course of a project's execution. Generally, a software project has elements such as tasks and human resources; each of these elements has its features and its relationships with the others. These elements, their features, and their relationships define a context. We define as dynamic context the changes that occur in the context during the execution of the project. People involved in project decision-making need to deal with this dynamic context, which increases the complexity of project management. Simulations are often applied to help answer specific questions or to study phenomena in a domain, through the analysis of experiments or execution results. For this, simulations use models that capture details of specific domains. Multi-agent systems modeling provides robust models to represent real-world environments that are complex and dynamic. One of the advantages of using multi-agent-based simulation is its ability to support realistic aspects of project management, incorporating its elements through agents. However, there is a lack of an approach to simulate software project management that represents the context, executes scenarios to assist in the decision-making process, and supports the dynamic context that arises throughout the project's execution. In this context, we propose ProMabs, a conceptual model based on multi-agent systems and software project management simulation. This conceptual model contains five components to model the context and its dynamics and thus execute simulations of scenarios that assist in the decision-making process. As contributions, this thesis presents ProMabs with three instances based on different technologies: a multi-agent programming platform, a simulation environment, and a simulation tool. We thereby evaluate ProMabs from three different perspectives. The application of ProMabs with these technologies allows us to represent elements and their relationships, create scenarios to assist decision-making, and support adaptive software projects, i.e., those with a dynamic context. Finally, we present an experiment with qualitative and quantitative analysis, applying ProMabs to represent the context, support its dynamics, and assist decision-making by means of scenarios. The experimental results are positive indications that instantiating ProMabs, supported by a simulation tool, assists participants in decision-making.
[16_MSc_silva]
Djalma Lúcio Soares da SILVA.
Uso de estruturas planares extraídas de imagens RGB-D em aplicações de realidade aumentada. [Title in English: Using planar structures extracted from RGB-D images in augmented reality applications].
M.Sc. Diss. Port. Presentation: 01/08/2016. 68 p. Advisor: Alberto Barbosa Raposo.
Abstract: This dissertation investigates the use of planar geometric
structures extracted from RGB-D images in Augmented Reality Applications. The
model of a scene is essential for augmented reality applications. RGB-D images
can greatly help the construction of these models because they provide geometric
and photometric information about the scene. Planar structures are prevalent in
many 3D scenes and, for this reason, augmented reality applications use planar
surfaces as one of the main components for projection of virtual objects.
Therefore, it is extremely important to have robust and efficient methods to
acquire and represent the structures that compose these planar surfaces. In this work, we present a method for identifying, segmenting, and representing planar structures from RGB-D images. Our representation of planar structures consists of triangulated, simplified, and textured two-dimensional polygons, forming a triangle mesh intrinsic to the plane that defines regions of this space corresponding to the surfaces of objects in the 3D scene. Through various experiments and the implementation of an augmented reality application, we demonstrate the techniques and methods used to extract planar structures from RGB-D images.
[16_PhD_sarmiento]
Edgar SARMIENTO CALISAYA.
Analysis of
natural language scenarios. [Title in Portuguese:
Análise de
cenários em linguagem natural].
Ph.D. Thesis. Eng. Presentation: 13/04/2016. 231 p. Advisor: Julio Cesar Sampaio do Prado Leite.
Abstract:
Requirements analysis plays a key role in
the software development process. Natural language-based scenario
representations are often used for writing software requirements specifications
(SRS). Scenarios written using natural language may be ambiguous, and,
sometimes, inaccurate. This problem is partially due to the fact that
relationships among scenarios are rarely represented explicitly. As scenarios
are used as input to subsequent activities of the software development process
(SD), it is very important to enable their analysis; especially to detect
defects due to wrong or missing information. This work proposes an approach based on Petri nets and Natural Language Processing (NLP) as an effective way to analyze the acquired scenarios; it takes textual descriptions of scenarios (conforming to a meta-model defined in this work) as input and generates an analysis report as output. To enable automated analysis, scenarios are translated into equivalent place/transition Petri nets. Scenarios and their resulting Petri nets can be automatically analyzed to evaluate properties related to unambiguity, completeness, consistency, and correctness. The identified defects can be traced back to the scenarios, allowing their revision. We also discuss how the unambiguity, completeness, consistency, and correctness of scenario-based SRSs can be decomposed into related properties, and define heuristics for finding defect indicators that hurt these properties. We evaluate our work by applying our analysis approach to four case studies. The evaluation compares the results achieved by our tool-supported approach with an inspection-based approach and with related work.
[16_MSc_reis]
Eduardo de Jesus Coelho REIS.
Anotação morfossintática a partir de contexto morfológico. [Title in English:
Morphosyntactic annotation based on morphological context].
M.Sc. Diss. Port. Presentation: 27/09/2016. 91 p. Advisor: Ruy Luiz Milidiú.
Abstract:
Part-of-speech tagging is one of the primary stages in natural language
processing, providing useful features for performing higher complexity tasks.
Word level representations have been largely adopted, either through a
conventional sparse codification, such as bag-of-words, or through a distributed
representation, like the sophisticated word embedded models used to describe
syntactic and semantic information. A central issue on these codifications is
the lack of morphological aspects. In addition, recent taggers achieve per-token accuracies around 97%. However, under a per-sentence metric, good taggers show modest accuracies, scoring around 55-57%. In this work, we demonstrate how to use n-grams to automatically derive sparse morphological features for text processing. This representation allows neural networks to perform POS tagging from character-level input. Additionally, we introduce a regularization strategy capable of selecting specific features for each layer unit. As a result, regarding n-gram selection, using the embedded regularization in our models produces two variants. The first shares globally selected features among all layer units, whereas the second performs individual selections for each layer unit, so that each unit is sensitive only to the n-grams that best stimulate it. Using the proposed approach, we generate a large number of features that capture relevant morphosyntactic information from character-level input. Our POS tagger achieves an accuracy of 96.67% on the Mac-Morpho corpus for Portuguese.
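The kind of character-level sparse feature the abstract describes can be sketched in a few lines: enumerate the character n-grams of a word (which naturally capture prefixes and suffixes, the morphological cues relevant to POS tagging) and turn them into a binary indicator vector. This is an assumed, generic feature scheme for illustration, not the dissertation's exact pipeline.

```python
# Sketch: deriving sparse morphological features from character n-grams,
# the kind of input a character-level POS tagger can consume.

def char_ngrams(word, n_min=2, n_max=3):
    """All character n-grams of the boundary-padded, lowercased word."""
    padded = "^" + word.lower() + "$"   # mark word boundaries
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

def sparse_vector(word, vocabulary):
    """Binary indicator vector over a fixed n-gram vocabulary."""
    grams = set(char_ngrams(word))
    return [1 if g in grams else 0 for g in vocabulary]

# "ndo" and "do$" fire on the Portuguese gerund "cantando" (singing):
vocab = ["do$", "^ca", "nd", "o$"]
vec = sparse_vector("cantando", vocab)
```

In practice the vocabulary would hold thousands of n-grams mined from a corpus, which is where the per-unit feature selection described above becomes useful.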
[16_MSc_cruz]
Felipe João Pontes da CRUZ.
Sistemas de recomendação utilizando Máquinas de Boltzmann restritas. [Title in English: Recommender systems using restricted Boltzmann machines].
M.Sc. Diss. Port. Presentation: 23/02/2016. 53 p. Advisor: Ruy Luiz Milidiú.
Abstract: Recommender systems can be used in many real-world problems. Many models have been proposed to solve the problem of predicting missing entries in a specific dataset. Two of the most common approaches are neighborhood-based collaborative filtering and latent factor models. A more recent alternative was proposed in 2007 by Salakhutdinov, using Restricted Boltzmann Machines (RBMs). This model belongs to the family of latent factor models, in which we model latent factors over the data using hidden binary units. RBMs have been shown to approximate solutions trained with a traditional matrix factorization model. In this work we revisit this model and carefully detail how to model and train RBMs for the problem of predicting missing ratings.
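The inference step of the RBM approach can be sketched compactly: observed ratings activate hidden binary units, and reconstructing the visible layer from those activations yields scores for unrated items. The weights, biases, and user below are toy values for illustration; the actual model learns W by contrastive divergence and uses softmax visible units with one group of K rating values per item, which this sketch simplifies to binary liked/not-liked.

```python
# Minimal sketch of RBM inference for rating prediction (in the spirit of
# Salakhutdinov et al., 2007; weights here are illustrative, not learned).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(visible, W, b_hidden):
    """p(h_j = 1 | v): each hidden unit pools evidence from observed ratings."""
    return [sigmoid(b_hidden[j] + sum(v * W[i][j] for i, v in enumerate(visible)))
            for j in range(len(b_hidden))]

def reconstruct(hidden, W, b_visible):
    """p(v_i = 1 | h): scores for all items, including unrated ones."""
    return [sigmoid(b_visible[i] + sum(h * W[i][j] for j, h in enumerate(hidden)))
            for i in range(len(b_visible))]

# A user who liked items 0 and 1 and has no rating for item 2; the first
# hidden unit acts as a latent "taste" factor shared by all three items.
W = [[2.0, -1.0], [2.0, -1.0], [2.0, -1.0]]   # 3 visible x 2 hidden
v = [1, 1, 0]
h = hidden_probs(v, W, [0.0, 0.0])
scores = reconstruct(h, W, [0.0, 0.0, 0.0])   # scores[2] predicts item 2
```

Because item 2 shares its weights' sign pattern with the liked items, the reconstruction assigns it a high score, which is precisely how the latent factors generalize to missing entries.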
[16_MSc_ismerio]
Fernando Cardoso ISMÉRIO.
Wearables in core stabilization. [Title in Portuguese:
StableBelt: wearables
em estabilização segmentar].
M.Sc. Diss. Eng. Presentation: 21/03/2016. 130 p. Advisor: Hugo Fuks.
Abstract:
In this dissertation, different types of audio biofeedback (ABF) for core
stabilization exercises using motion sensors are investigated. Core
stabilization exercises are one of the strategies used in the treatment of low
back pain. The Supine Bridge (SB) exercise was chosen as the focus for the
investigation. The primary motion sensor used was a tri-axial accelerometer.
Flex Sensors, Force Sensitive Resistors and multiple accelerometers were also
used in other prototypes. The results of this dissertation, which include accelerometer data, comments, process notes, reflections, and the implementation of prototypes that generate 3 types of audio biofeedback, were gathered during 5 cycles of action research. In action research, the researcher conducts the research by performing successive actions that attempt to reduce a specific problem in a real-world environment. In this dissertation, the environment chosen was a place where a patient executes exercises, and the problem identified was the patient's difficulty in performing the exercises correctly. The action was the introduction of a wearable, StableBelt, which generates audio biofeedback based on the patient's movements during a core stabilization exercise. Different types of audio were investigated: instrumental music, piano, and drums. StableBelt was evaluated through 3 user tests. After a preliminary test with one participant, user tests with 5 and 8 participants were conducted. In the preliminary test instrumental music was used, and piano and drums in the later tests. The last cycle of the action research was dedicated to the comfort of the StableBelt. During the investigation, physical therapists who research low back pain and physical therapists who use core stabilization exercises in their clinical practice were interviewed.
[16_PhD_silva]
Greis Francy Mireya SILVA CALPA.
Estratégias para suporte à colaboração em sistemas presenciais para pessoas com Transtorno do Espectro Autista. [Title in English: Strategies to support collaboration in face-to-face systems for people with Autism Spectrum Disorders].
Ph.D. Thesis. Port. Presentation: 22/12/2016. 208 p. Advisor: Alberto Barbosa Raposo.
Abstract: Face-to-face collaborative systems for people with autism spectrum disorders use strategies to motivate or enforce collaboration among users. However, even collaborative applications developed for this audience still do not consider notions of awareness for users who have difficulty understanding the most basic concepts of a collaborative activity. Users with autism have difficulty recognizing and interpreting the gestures and mental states of others, which restricts their capacity to understand implicit information that is essential for being aware of what is happening around them and, consequently, for performing collaborative activities. In this work, we investigate how to offer awareness support, especially for users with low-functioning autism, in order to formulate and evaluate a set of collaborative strategies to support the design of more appropriate collaborative systems. For this purpose, we used the action research methodology. Following it, we performed four research cycles of action and reflection on the proposed solutions, so that we could conceive the proposed set of collaborative strategies. In this cyclic process, we verified that collaborative systems should offer awareness mechanisms in the interface (based on certain requirements) at different levels of approximation to the collaboration, as well as activities that get users to know each dimension of collaboration and gradually understand it as a whole. These aspects compose the set of collaborative strategies conceived in this work.
[16_MSc_monteagudo]
Grettel MONTEAGUDO GARCIA.
Analyzing, comparing and recommending conferences.
[Title in Portuguese:
Análise, comparação e
recomendação de conferências].
M.Sc. Diss. Port. Presentation: 17/03/2016. 65 p. Advisor: Marco Antonio
Casanova.
Abstract:
This dissertation discusses techniques to
automatically analyze, compare and recommend conferences, using bibliographic
data, outlines an implementation of the proposed techniques and describes
experiments with data extracted from a triplified version of the DBLP
repository. Conference analysis applies statistical and social network analysis
measures to the co-authorship network. The techniques for comparing conferences
explore familiar similarity measures, such as the Jaccard similarity
coefficient, the Pearson correlation similarity and the cosine similarity, and a
new measure, the co-authorship network communities similarity index. These
similarity measures are used to create a conference recommendation system based
on the Collaborative Filtering strategy. Finally, the work introduces two
techniques for recommending conferences to a given prospective author based on
the strategy of finding the most related authors in the co-authorship network.
The first alternative uses the Katz index, which can be quite costly for large
graphs, while the second one adopts an approximation of the Katz index, which
proved to be much faster to compute. The experiments suggest that the best
performing techniques are: the technique for comparing conferences that uses the
new similarity measure based on co-authorship communities; and the conference
recommendation technique that explores the most related authors in the
co-authorship network.
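Two of the familiar similarity measures mentioned above can be made concrete by representing each conference as the set of its authors. The sketch below shows the Jaccard coefficient and the cosine similarity over binary author-incidence vectors; the conference and author names are made up for illustration, and the new community-based index and the Katz approximation are not reproduced here.

```python
# Illustrative sketch: similarity between conferences via their author sets.

def jaccard(a, b):
    """|A intersect B| / |A union B| over two author sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine_binary(a, b):
    """Cosine similarity of the binary author-incidence vectors, which for
    sets reduces to |A intersect B| / sqrt(|A| * |B|)."""
    a, b = set(a), set(b)
    return len(a & b) / ((len(a) * len(b)) ** 0.5) if a and b else 0.0

conf_a = {"ana", "bob", "carla", "dan"}
conf_b = {"bob", "carla", "eve"}
# Two shared authors out of five distinct ones drive both scores.
```

Either score can feed a collaborative-filtering recommender directly: conferences most similar to those where an author has already published become candidates for recommendation.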
[16_MSc_descragnolle-taunay]
Henrique D'ESCRAGNOLLE-TAUNAY.
A spatial partitioning heuristic for automatic adjustment of the 3D navigation speed in multiscale virtual environments. [Title in Portuguese: Uma heurística de particionamento espacial para o ajuste automático da velocidade de navegação 3D em ambientes virtuais multiescala].
M.Sc. Diss. Eng. Presentation: 04/03/2016. 47 p. Advisor: Alberto Barbosa Raposo.
Abstract:
With technological evolution, 3D virtual environments continuously increase in
complexity; such is the case with multiscale environments, i.e., environments
that contain groups of objects with extremely diverging levels of scale. Such
scale variation makes it difficult to interactively navigate in this kind of
environment since it demands repetitive and intuitive adjustments in their
velocity or scale, according to the objects that are close to the observer, in
order to ensure a comfortable and stable navigation. Recent efforts have relied
on heavy GPU-based solutions that may not be feasible depending on the
complexity of the scene. We present a spatial partitioning heuristic for
automatic adjustment of the 3D navigation speed in a multiscale virtual
environment, minimizing the workload and transferring it to the CPU, allowing
the GPU to focus on rendering. Our proposal describes a geometrical strategy in
the preprocessing phase that allows us to estimate, in the real-time phase, the
shortest distance between the observer and the object nearest to him. From
this single piece of information, we can automatically adjust the navigation
speed according to the characteristic scale of the region where the observer
is. With the scene's topological information obtained in the preprocessing
phase, we are able to obtain, in real time, the closest objects and the visible
objects, which allows us to propose two different heuristics for automatic
navigation velocity. Finally, in order to verify the usability gain of the
proposed approaches, user tests were conducted to evaluate the accuracy and
efficiency of the navigation, as well as the users' subjective satisfaction.
Results were particularly significant in demonstrating an accuracy gain in
navigation when using the proposed approaches, for both laymen and advanced
users.
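As an illustration of the core idea of the abstract above, a common rule in multiscale navigation is to make the travel speed proportional to the distance to the nearest object. This is only a minimal sketch of that rule; the function name and constants are hypothetical, and the dissertation's heuristics are more elaborate:

```python
def navigation_speed(dist_to_nearest, factor=0.5, v_min=1e-6):
    """Speed proportional to the distance to the nearest object, so the
    observer slows down near small-scale content and speeds up in open
    space. Illustrative rule only, not the dissertation's heuristics."""
    return max(v_min, factor * dist_to_nearest)

# Far from geometry: fast travel. Close to a small-scale object: slow travel.
print(navigation_speed(100.0), navigation_speed(0.01))
```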
[16_MSc_bistene]
Joanna Pivatelli BISTENE. A contratação de tecnologia da informação na
Administração Pública Federal: o caso do desenvolvimento de software sob demanda. [Title in
English: Information technology acquisition in Brazilian
Federal Government: the case of on-demand software development].
MSc. Diss. Eng. Presentation: 27/09/2016. 253 p. Advisor:
Julio Cesar Sampaio do Prado Leite.
Abstract:
Acquisition of Information Technology (IT) by the Brazilian Federal Government
is governed by law. Specifically, Law 8.666/1993 is intended to establish the
rules for such contracts, mandating their planning. The Requirements
Engineering literature emphasizes that requirements evolve during the
definition process, but this is often disregarded. Therefore, there is a clear
conflict between requirements definition in Brazilian Federal Government IT
acquisition and the current legislation. The obligation to define requirements
before software procurement is imposed by law and can generate problems in
contract management. This dichotomy between requirements mutability and legal
rigidity in the procurement process inspired an exploratory study with public
organizations. Our research provides transparency into the problems
experienced by these agencies in the procurement of IT solutions. We prepared a
preliminary analysis of these problems and pointed out possible solutions.
[16_MSc_aguiar]
José Luiz do Nascimento AGUIAR.
Medidas de similaridade entre séries
temporais.
[Title in English: Time series similarity measures].
MSc. Diss. Port. Presentation: 11/03/2016. 75 p. Advisor: Eduardo Sany Laber.
Abstract:
Nowadays, a very important task in data mining is understanding how to extract
the most informative data from a very large amount of data. Since every single
field of knowledge has lots of data to summarize into its most representative
information, the time series approach is definitely a very strong way to
represent and collect this information. On the other hand, we need an
appropriate tool to extract the most significant data from such time series. To
help us, we can use similarity methods to measure how similar one time series
is to another. In this work, we perform a study using several distance-based
similarity methods and apply them in clustering algorithms, in order to assess
whether some combination (distance-based similarity method / clustering
algorithm) performs better than all the others used in this work, or whether
one distance-based similarity method outperforms the others.
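To make the notion of distance-based similarity concrete, here is a minimal sketch of two classic measures between time series, lock-step Euclidean distance and Dynamic Time Warping (DTW). This is an illustration of the general technique only, not the dissertation's code, and the sample series are made up:

```python
import math

def euclidean(a, b):
    # Lock-step distance: compares the i-th point of one series with the
    # i-th point of the other; series must have equal length.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dtw(a, b):
    # Dynamic Time Warping: finds a minimal-cost alignment, so series of
    # different lengths or with time shifts can still match closely.
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

s1 = [0.0, 1.0, 2.0, 1.0, 0.0]
s2 = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]  # same shape, shifted in time
print(euclidean(s1, s1), dtw(s1, s2))  # → 0.0 0.0
```

DTW judges `s1` and `s2` identical despite the time shift, which is exactly why the choice of distance measure matters when these distances feed a clustering algorithm.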
[16_MSc_nascimento]
Leonardo Henrique Camello do NASCIMENTO.
Um estudo
de presença em uma aplicação de realidade virtual
para tratamento de pessoas com medo de voar.
[Title in English:
A
presence study in a virtual reality application for the treatment of people with
fear of flying].
MSc. Diss. Port. Presentation: 29/04/2016. 63 p. Advisor: Alberto Barbosa
Raposo.
Abstract:
Fear of flying is a real problem that affects 10% to 25% of the world’s
population. Approximately 25% of adults experience a significant increase in
their anxiety levels when required to take any type of air transport and 10% of
them avoid the situation. The approach that has proven to be the most effective
in the treatment of phobias is in vivo exposure. However, the
difficulty and the cost, and sometimes even the danger, of using real airplanes
and real flights to expose people with fear of flying to these stimuli have
daunted many researchers, therapists, and patients despite the prevalence and
the impact of the fear of flying. We present in this study a virtual reality
application that promotes systematic exposure, through computer-generated
environments, to the stimuli that cause a significant increase in anxiety
levels related to fear of flying. This application uses the concept of
immersion through the Oculus Rift to promote an “almost real” experience for
the patients. To evaluate the proposed application, in particular the “sense of
presence” it causes, we obtained qualitative data from interviews and
questionnaires with its “meta-users”, i.e., the psychiatrists who will apply
the treatment to their patients.
[16_PhD_duarte]
Leonardo Seperuelo DUARTE.
TopSim: a plugin-based framework for large-scale
numerical analysis.
[Title in Portuguese:
TopSim: um sistema baseado em plugin para análise numérica em larga escala].
Ph.D. Thesis. Eng. Presentation: 09/09/2016. 91 p. Advisor: Waldemar Celes
Filho.
Abstract:
Computational methods in engineering are used to solve physical problems that do
not have analytical solution or their perfect mathematical representation is
unfeasible. Numerical techniques, including the largely used finite element
method, require the solution of linear systems with hundreds of thousands
equations, demanding high computational resources (memory and time). In this
thesis, we present a plugin-based framework for large-scale numerical analysis.
The framework is used as an original tool to solve topology optimization
problems using the finite element method with millions of elements. Our strategy
uses an element-by-element technique to implement a highly parallel code for an
iterative solver with low memory consumption. Besides, the plugin approach
provides a fully flexible and easy to extend environment, where different types
of applications, requiring different types of finite elements, materials, linear
solvers, and formulations, can be developed and improved. The kernel of the
framework is minimal, with only a plugin manager module responsible for loading
the desired plugins at runtime using an input configuration file. All the
features required for a specific application are defined inside plugins, with no
need to change the kernel. Plugins may provide or require additional specialized
interfaces, where other plugins may be connected to compose a more complex and
complete system. We present results for a structural linear elastic static
analysis and for a structural topology optimization analysis. The simulations
use elements Q4, hexahedron (Brick8), and hexagonal prism (Honeycomb), with
direct and iterative solvers using sequential, parallel and distributed
computing. We investigate the performance regarding the use of memory and the
scalability of the solution for problems with different sizes, from small to
very large examples on a single machine and on a cluster. We simulated a linear
elastic static example with 500 million elements on 300 machines.
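The plugin architecture described above (a minimal kernel whose only job is to load the plugins named in a configuration file) can be sketched as follows. TopSim itself is not written in Python; all class and plugin names here are hypothetical and purely illustrative:

```python
class Kernel:
    """Minimal plugin-manager sketch. The kernel only knows how to load
    plugins named in a configuration list; every actual feature (elements,
    materials, solvers, formulations) comes from the plugins themselves."""

    def __init__(self):
        self.registry = {}  # plugin name -> factory callable
        self.loaded = {}

    def register(self, name, factory):
        self.registry[name] = factory

    def load(self, config):
        # config: list of plugin names, as if read from an input file.
        for name in config:
            self.loaded[name] = self.registry[name](self)
        return self.loaded


class IterativeSolver:
    """Hypothetical plugin: receives the kernel so it can discover and
    connect to other plugins' specialized interfaces."""

    def __init__(self, kernel):
        self.kernel = kernel


kernel = Kernel()
kernel.register("solver.iterative", IterativeSolver)
plugins = kernel.load(["solver.iterative"])
print(sorted(plugins))  # → ['solver.iterative']
```

The design point is that adding a new element type or solver means registering one more factory, with no change to the kernel.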
[16_MSc_millan]
Liander MILLÁN FERNÁNDEZ.
Concurrent programming in Lua - revisiting the Luaproc
library.
M. Sc. Diss. Eng. Presentation: 16/12/2016. 68 p. Advisor: Noemi da La Roque
Rodriguez.
Abstract: In recent years, the tendency to increase the performance of a
microprocessor, as an alternative solution to the increasing demand for
computational resources of both applications and systems, has decreased
significantly. This has led to an increase of the interest in employing
multiprocessing environments. Although many models and libraries have been
developed to offer support for concurrent programming, ensuring that several
execution flows access shared resources in a controlled way remains a complex
task. The Luaproc library, which provides support for concurrency in Lua, has
shown some promise in terms of performance and use cases. In this dissertation,
we study the Luaproc library and incorporate new functionalities into it in
order to make it more user-friendly and extend its use to new scenarios. First,
we introduce the motivations for our extensions to Luaproc, discussing
alternative ways of dealing with the existing limitations. Then, we present the
requirements, implementation characteristics, and limitations associated with
each of the mechanisms implemented as solutions to these limitations. Finally,
we employ the incorporated functionalities to implement some concurrent
applications, in order to evaluate the performance and verify the proper
functioning of such mechanisms.
[16_MSc_talavera]
Luis Eduardo TALAVERA RIOS.
An energy-aware IoT gateway with continuous processing of sensor data.
[Title in Portuguese: Um energy-aware IoT gateway
com processamento contínuo de dados de sensor].
MSc. Diss. Eng. Presentation: 16/03/2016. 73 p. Advisor: Markus Endler.
Abstract:
Few studies have investigated and proposed a middleware solution for the
Internet of Mobile Things (IoMT), in which smart things (Smart Objects) can be
moved, or else can move autonomously, while remaining accessible from any other
computer over the Internet. In this context, there is a need for
energy-efficient gateways to provide connectivity to a great variety of Smart
Objects. Proposed solutions have shown that mobile devices (smartphones and
tablets) are a good option to become the universal intermediaries, providing a
connection point to nearby Smart Objects with short-range communication
technologies. However, they focus only on the transmission of raw sensor data
(obtained from connected Smart Objects) to the cloud, where processing (e.g.
aggregation) is performed. Internet communication is a strong battery-draining
activity for mobile devices; moreover, bandwidth may not be sufficient when
large amounts of information are being received from the Smart Objects. Hence,
we argue that some of the processing should be pushed as close as possible to
the sources. In this regard, Complex Event Processing (CEP) is often used for
real-time processing of heterogeneous data and could be a key technology to
include in the gateways. It provides a way to describe the processing as
expressive queries that can be dynamically deployed or removed on the fly,
making it suitable for applications that have to deal with dynamic adaptation
of local processing. This dissertation describes the extension of a mobile
middleware with continuous processing of sensor data, its design, and a
prototype implementation for Android. Experiments have shown that our
implementation delivers a good reduction in energy and bandwidth consumption.
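The kind of local continuous processing the abstract argues for can be illustrated with a toy sliding-window query: instead of shipping every raw reading to the cloud, the gateway keeps only a windowed aggregate. This is a minimal sketch under invented names; it is not the middleware's actual CEP engine:

```python
from collections import deque

class SlidingAverage:
    """Toy continuous query: average of the last `size` sensor readings.
    Hypothetical illustration of window-based CEP, not the thesis code."""

    def __init__(self, size):
        # deque with maxlen automatically evicts the oldest reading.
        self.window = deque(maxlen=size)

    def push(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

# Each push emits the aggregate over the current window of 3 readings,
# so only one small number (not the raw stream) needs to leave the gateway.
q = SlidingAverage(size=3)
outputs = [q.push(v) for v in [10, 20, 30, 40]]
print(outputs)  # → [10.0, 15.0, 20.0, 30.0]
```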
[16_MSc_netto]
[16_MSc_silva]
[16_MSc_mota]
[16_MSc_martins]
[16_MSc_silva]
[16_PhD_ferreira]
[16_MSc_masson]
[16_PhD_viana]
[16_MSc_teixeira]
[16_PhD_ribeiro]
[16_MSc_souzafilho] *
[16_MSc_dunker]
[16_PhD_nasser]
[16_MSc_portugal]
[16_MSc_pinheiro]
[16_MSc_fiolgonzalez]
[16_MSc_silvaneto]
Abstract: In this project, inspired by the fields of Pedagogy and
Entertainment, we aim to develop a digital game development framework to
facilitate the creation of educational games of the JRPG (Japanese Role-Playing
Game) sub-genre that are more interesting than the majority of educational
games currently available. The RPG genre is, by definition, based on
storytelling and role-playing principles, identified in the literature as
important tools that stimulate students' imagination, engage them emotionally,
and arouse their interest in the traditional educational program. The JRPG
sub-genre, in turn, represents a special category of electronic RPGs that
inherits those same educational principles, but has well-defined delimitations
with respect to game mechanics and artistic identity. These delimitations are
positive in the sense that they work as guidelines for the development process
of this kind of game.
[16_PhD_segura]
[16_MSc_costa]
[16_MSc_barroso]
Luiz Felipe NETTO. Algoritmo de
corte com preservação de contexto para visualização de modelos de reservatório.
[Title in English: Cutaway algorithm with context
preservation for reservoir model visualization].
MSc. Diss. Port. Presentation: 16/09/2016. 71 p. Advisor: Waldemar Celes Filho.
Abstract: Numerical simulation of black oil reservoir is widely used in the
oil and gas industry. The reservoir is represented by a model of hexahedral
cells with associated properties, and the numerical simulation is used to
predict the fluid behavior in the model. Specialists analyze such
simulations by inspecting, in a graphical interactive environment, the
tridimensional model. In this work, we propose a new cutaway algorithm with
context preservation to help the inspection of the model. The main goal is to
allow the specialist to visualize the wells and their vicinity. The wells
represent the object of interest that must be visible while preserving the
tridimensional model (the context) in the vicinity as far as possible. In this
way, it is possible to visualize the distribution of cell property together with
the object of interest. The proposed algorithm makes use of graphics processing
units and is valid for arbitrary objects of interest. We also propose an
extension to the algorithm that allows the cut section to be decoupled from the
camera, enabling analysis of the cut model from different points of view. The
effectiveness of the proposed algorithm is demonstrated by a set of results
based on actual reservoir models.
Luiz José Schirmer SILVA.
CrimeVis: an interactive
visualization system for analyzing criminal data in the state of Rio de Janeiro.
[Title in Portuguese: CrimeVis: um
sistema interativo de visualização para análise de dados criminais do estado do
Rio de Janeiro].
MSc. Diss. Eng. Presentation: 02/06/2016. 53 p. Advisor: Hélio Cortes Vieira
Lopes.
Abstract: This work presents the development of an interactive graphic
visualization system for analyzing criminal data in the State of Rio de Janeiro,
provided by the Public Safety Institute from the State of Rio de Janeiro
(ISP-RJ, Instituto de Segurança Pública). The system presents to the user a set
of integrated tools that support visualizing and analyzing statistical data on
crimes, which make it possible to infer relevant information regarding
government policies on public safety and their effects. The tools allow us to
visualize multidimensional data, spatiotemporal data, and multivariate data in
an integrated manner using brushing and linking techniques. The work also
presents a case study to evaluate the set of tools we developed.
Marcelo Garnier MOTA.
Exploring structured information retrieval for bug localization in C# software
projects.
[Title in Portuguese: Explorando
recuperação de informação estruturada para localização de defeitos em projetos
de software C#].
MSc. Diss. Eng. Presentation: 16/09/2016. 91 p. Advisor: Alessandro Fabrício
Garcia.
Abstract: Software projects can grow very
rapidly, reaching hundreds or thousands of files in a relatively short time
span. Therefore, manually finding the source code parts that should be changed
in order to fix a bug is a difficult task. Static bug localization techniques
provide effective means of finding files related to a bug. Recently, structured
information retrieval has been used to improve the effectiveness of static bug
localization, being successfully applied by techniques such as BLUiR, BLUiR+,
and AmaLgam. However, there are significant shortcomings in how these techniques
were evaluated. BLUiR, BLUiR+, and AmaLgam were tested on only four projects,
all of them structured with the same language, namely, Java. Moreover, the
evaluations of these techniques (i) did not consider appropriate program
versions, (ii) included bug reports that already mentioned the bug location, and
(iii) did not exclude spurious files, such as test files. These shortcomings
suggest the actual effectiveness of these techniques may be lower than reported
in recent studies. Furthermore, there is limited knowledge on whether and how
the effectiveness of these state-of-the-art techniques can be improved. In this
dissertation, we evaluate the three aforementioned techniques on 20 open-source
C# software projects, providing a rigorous assessment of the effectiveness of
these techniques on a previously untested object-oriented language. Moreover, we
address the simplistic assumptions commonly present in bug localization studies,
thereby providing evidence on how their findings may be biased. Finally, we
study the contribution of different program construct types to bug localization.
This is a key aspect of how structured information retrieval is applied in bug
localization. Therefore, understanding how each construct type influences bug
localization may lead to effectiveness improvements in projects structured with
a specific programming language, such as C#.
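The general mechanism behind the techniques discussed above is information retrieval over source files: score each file by its textual similarity to the bug report. The following is a bare-bones TF-IDF and cosine-similarity sketch of that idea, with made-up file names and tokens; it omits the structured retrieval (separate fields for class names, method names, comments) that BLUiR and its successors add:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: dict name -> list of tokens (identifiers, words)."""
    df = Counter()
    for toks in docs.values():
        df.update(set(toks))  # document frequency of each token
    n = len(docs)
    return {name: {t: tf * math.log(n / df[t])
                   for t, tf in Counter(toks).items()}
            for name, toks in docs.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical corpus: tokens from two source files plus one bug report.
docs = {
    "Parser.cs": ["parse", "token", "syntax", "error"],
    "Button.cs": ["button", "click", "render", "color"],
    "bug_report": ["parse", "error", "crash"],
}
vecs = tfidf_vectors(docs)
scores = {f: cosine(vecs["bug_report"], vecs[f])
          for f in ("Parser.cs", "Button.cs")}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # Parser.cs shares "parse"/"error" and ranks first
```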
Marcelo Malta Rodrigues MARTINS.
Strong lower bounds for the CVRP via column and
cut generation.
[Title in Portuguese: Limites inferiores fortes para o CVRP via geração
de colunas e cortes].
MSc. Diss. Eng. Presentation: 18/01/2016. 67 p. Advisor: Marcus Vinicius
Soledade Poggi de Aragão.
Abstract: The Capacitated Vehicle Routing Problem (CVRP) is the seminal
version of the vehicle routing problem, a classical problem in Operational
Research. Introduced by Dantzig and Ramser, the CVRP generalizes the Traveling
Salesman Problem (TSP) and the Bin Packing Problem (BPP). In addition, routing
problems arise in several real-world applications, often in the context of
reducing costs, pollutant emissions, or energy within transportation
activities. In fact, transportation can be roughly estimated to represent 5% to
20% of the overall cost of a delivered product, which means that any saving in
routing can be quite relevant. The CVRP is stated as follows: given a set of
n + 1 locations (a depot and n customers), the distances between every pair of
locations, integer demands associated with each customer, and a vehicle
capacity, we are interested in determining the set of routes that start at the
depot, visit each customer exactly once, and return to the depot. The total
distance traveled by the routes should be minimized, and the sum of the demands
of the customers on each route should not exceed the vehicle capacity. This
work considers that the number of available vehicles is given. State-of-the-art
algorithms for finding and proving optimal solutions for the CVRP compute their
lower bounds through column generation and improve them by adding cutting
planes. The columns generated may be elementary routes, where customers are
visited only once, or relaxations such as q-routes and the more recent
ng-routes, which differ in how they allow repeating customers along the routes.
Cuts may be classified as robust, those defined over arc variables, and
non-robust (or strong), those defined over the column generation master problem
variables. The term robust refers to how adding the cut affects the efficiency
of solving the pricing problem. Beyond the description above, the most
efficient exact algorithms for the CVRP use many additional elements, making
their replication a long and hard task. The objective of this work is to
determine how good the lower bounds computed by a column generation algorithm
over ng-routes can be when using only capacity cuts and a family of strong
cuts, the limited-memory subset row cuts. We assess the leverage achieved by
considering this kind of strong cut and its combination with other techniques,
such as Decremental State Space Relaxation (DSSR), completion bounds,
ng-routes, and capacity cuts, over a Set Partitioning formulation of the
problem. Extensive computational experiments are presented along with an
analysis of the results obtained.
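The CVRP statement given in the abstract (routes start and end at the depot, each customer visited once, route demand within capacity, total distance minimized) can be checked on a toy instance by brute force. This sketch only illustrates the problem definition on a made-up 3-customer instance; it bears no resemblance to the column-and-cut generation algorithms the dissertation studies:

```python
import itertools
import math

def route_cost(route, dist, depot=0):
    """Cost of one route: depot -> customers in order -> depot."""
    path = [depot] + list(route) + [depot]
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def brute_force_cvrp(n, dist, demand, capacity, vehicles):
    """Enumerate every customer-to-vehicle assignment and visiting order,
    keeping only capacity-feasible solutions. Exponential; toy sizes only."""
    best = float("inf")
    customers = range(1, n + 1)
    for assign in itertools.product(range(vehicles), repeat=n):
        routes = [[c for c in customers if assign[c - 1] == v]
                  for v in range(vehicles)]
        if any(sum(demand[c] for c in r) > capacity for r in routes):
            continue  # violates vehicle capacity
        cost = sum(min(route_cost(p, dist)
                       for p in itertools.permutations(r))
                   for r in routes)
        best = min(best, cost)
    return best

# Made-up instance: depot at (0,0), three unit-demand customers.
coords = [(0, 0), (0, 1), (0, 2), (1, 0)]
dist = [[math.dist(p, q) for q in coords] for p in coords]
demand = {1: 1, 2: 1, 3: 1}
best = brute_force_cvrp(3, dist, demand, capacity=2, vehicles=2)
print(best)  # → 6.0  (route [1, 2] costs 4.0, route [3] costs 2.0)
```

Exact CVRP algorithms exist precisely because this enumeration explodes; column generation prices out good routes instead of listing all of them.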
Marcos Vinícius Marques da SILVA.
VelvetH-DB: uma abordagem robusta de banco de dados no processo de montagem de
fragmentos de sequências biológicas.
[Title in English: VelvetH-DB: a robust database approach for the
assembly process of biological sequences].
MSc. Diss. Port. Presentation: 30/03/2016. 66 p. Advisor: Sergio Lifschitz.
Abstract: Recent technological advances, both in assembly algorithms and in
sequencing methods, have enabled the reconstruction of whole DNA even without a
reference genome available. The assembly of the complete chain involves reading
a large volume of genome fragments, called short-reads, which makes the problem
a significant computational challenge. A major bottleneck for all existing
fragment assembly algorithms is the high consumption of RAM. This dissertation
studies the implementation of one of these algorithms, called Velvet, which is
widely used and recommended. It has a module, VelvetH, that performs a
pre-processing of the data with the aim of reducing main memory consumption.
After a thorough study of code improvements and alternatives, we made specific
changes and proposed a solution with data persistence in secondary memory in
order to obtain effectiveness and robustness.
Marilia Guterres FERREIRA.
Anticipating change in software
systems supporting organizational Information systems using an organizational
based strategic perspective. [Title in Portuguese:
Antecipando mudanças em sistemas de software que suportam
Sistemas de Informação Organizacionais usando uma perspectiva estratégica
baseada em organizações].
Ph.D. Thesis. Eng. Presentation: 18/10/2016. 240 p. Advisor: Julio Cesar Sampaio
do Prado Leite.
Abstract: Keeping organizations and their Software Systems
supporting Organizational Information Systems (SSsOIS) aligned over time is a
complex endeavour. We believe understanding the organizational dynamics of
changes, and of the impacts these changes might cause, can support the evolution
of SSsOIS. Yet, reasoning about the organizational changes in advance also
supports the development of an SSsOIS more likely to be aligned to the dynamics
of the organization. Based on this, we ground our work on strategic management
theory, which reasons about possible futures of the organization and formulates
strategies to achieve new goals in these possible futures. We propose to apply
the outcomes of strategic management to prepare SSsOIS for the future, i.e. to
prepare SSsOIS for these new requirements raised from the strategic plans. For
this, we use scenario planning as a tool to support key people in the
organization to think about multiple possible futures and plan strategies. In
order to keep the strategic planning of the organization aligned to the SSsOIS,
we propose an Organizational Dynamics-based Approach for Requirements
Elicitation (ODA4RE), composed of scenario-based strategic planning (SSP),
organizational impact analysis (OIA), and validation of the likely SSsOIS’
requirements (LSRV). OIA also introduces an organizational dynamics metamodel
(ODMM) on which to base the reasoning, and an organizational dynamics questions
set (ODQS) to explore likely organizational impacts. We evaluate our proposal in
four empirical studies with different purposes: first in an academic
organization in Rio de Janeiro to analyse specifically the SSP, second in a
workshop to evaluate the ODMM’s expressiveness, third in a Post Office branch in
London to analyse OIA, and finally the entire approach at a Brazilian research
university. Results show contributions towards SSsOIS’ requirements evolution as
they align with the organization plans.
Matheus Manhães MASSON.
Cold Start em recomendação de músicas utilizando deep learning.
[Title in English: Cold Start in music recommendation using deep learning].
MSc. Diss. Eng. Presentation: 23/08/2016. 61 p. Advisor: Ruy Luiz Milidiú.
Abstract: Recommender systems are used to provide information or products by
automatically learning the profile of their users with Machine Learning
techniques. Typically, these systems are based on previously collected data
about their products and users. When there are no previous data, these systems
do not work; this is called the Cold Start Problem. This work focuses on the
Cold Start Problem, which affects the quality of recommendations and causes the
failure to recommend new songs by traditionally used methods. To address it, we
use deep learning on the audio of the songs to extract features that are useful
for recommendation. From the latent factors obtained by matrix factorization, a
convolutional neural network is trained to learn these factors from the audio.
Thus, the network can be used to predict the latent factors of songs using only
the audio, without the need for previous data, which makes it a viable solution
to the Cold Start Problem. The results show that this is a workable solution to
the problem, even though it did not reach the best metrics of traditional
methods. The trained convolutional network learns from the audio and predicts
the factors; the results thus allow recommending new songs and may even improve
recommendation when using hybrid methods.
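The first stage of the pipeline described above, obtaining item latent factors by matrix factorization, can be sketched with plain SGD. This is an illustrative toy (invented ratings, hyperparameters chosen arbitrarily), not the dissertation's code, and the CNN stage that regresses these factors from audio is not shown:

```python
import random

def factorize(ratings, k=2, steps=3000, lr=0.02, reg=0.02):
    """Plain SGD matrix factorization.
    ratings: dict (user, item) -> observed value."""
    random.seed(0)
    users = sorted({u for u, _ in ratings})
    items = sorted({i for _, i in ratings})
    P = {u: [random.gauss(0, 0.1) for _ in range(k)] for u in users}
    Q = {i: [random.gauss(0, 0.1) for _ in range(k)] for i in items}
    for _ in range(steps):
        for (u, i), r in ratings.items():
            err = r - sum(pu * qi for pu, qi in zip(P[u], Q[i]))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Gradient step with L2 regularization.
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# The item vectors in Q play the role of the latent factors the CNN would
# be trained to predict from audio alone, for brand-new (cold-start) songs.
ratings = {("a", "x"): 5.0, ("a", "y"): 1.0,
           ("b", "x"): 4.0, ("b", "y"): 1.0}
P, Q = factorize(ratings)
pred = sum(pu * qi for pu, qi in zip(P["a"], Q["x"]))
```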
Marx Leles VIANA.
Design e implementação de agentes de
software adaptativos normativos.
[Title in English: Design and implementation of adaptive normative
software agents].
Ph. D. Thesis. Port. Presentation: 05/12/2016. 157 p. Advisor: Carlos José
Pereira de Lucena.
Abstract: Multi-agent systems have been introduced as a new paradigm for
conceptualizing, designing and implementing software systems that are becoming
increasingly complex, open, distributed, dynamic, autonomous and highly
interactive. However, agent-oriented software engineering has not been widely
adopted, mainly due to lack of modeling languages that are expressive and
comprehensive enough to represent relevant agent-related abstractions and
support the refinement of design models into code. Most modeling languages do
not define how these abstractions interact at runtime, but many software
applications need to adapt their behavior, react to changes in their
environments dynamically, and align with some form of individual or collective
normative application behavior (e.g., obligations, prohibitions and
permissions). In this thesis, we propose a metamodel and an architecture approach
to developing adaptive normative agents. We believe the proposed approach will
advance the state of the art in agent systems so that software technologies for
dynamic, adaptive, norm-based applications can be designed and implemented.
Otávio Freitas TEIXEIRA. Auto-Sintonia para sistemas de
bancos de dados na nuvem.
[Title in English: Database self-tuning in the Cloud].
MSc. Diss. Eng. Presentation: 31/03/2016. 63 p. Advisor: Sergio Lifschitz.
Abstract: Cloud computing is changing the way users access and benefit from
computing services. A database manager is one of the main features of this new
working environment. However, large volumes of data must be properly managed
and made available according to fluctuations in workloads and as a function of
new and existing parameters. Because of the scale of this new cloud
environment, it is very difficult for a DBA to manually manage it while
maintaining acceptable availability and performance. In particular, there is a
need for an automatic tuning process in the cloud system to meet contractual
operation requirements, as well as a need to offer resources to users as if
they were unlimited, while keeping excellent performance. This dissertation
explains and compares the (self-)tuning activities of database systems
operating in conventional and cloud environments, emphasizing the differences
observed from the points of view of the cloud service provider and of the users
in a DBaaS context. In particular, we propose to extend a tuning ontology in
order to automate tuning actions for Database as a Service.
Paula Ceccon RIBEIRO.
Uncertainty Analysis of 2D vector
fields through the Helmholtz-Hodge Decomposition.
[Title in Portuguese: Análise de incertezas em campos vetoriais 2D com o
uso da Decomposição de Helmholtz-Hodge].
Ph.D. Thesis Eng. Presentation: 15/12/2016. 109 p. Advisor: Hélio Cortes Vieira
Lopes.
Abstract: Vector fields play an essential role in a large range of scientific
applications. They are commonly generated through computer simulations. Such
simulations may be a costly process because they usually require high
computational time. When researchers want to quantify the uncertainty in such
applications, an ensemble of vector field realizations is usually generated,
making the process much more expensive. The Helmholtz-Hodge Decomposition is a
very useful instrument for vector field interpretation because it
traditionally distinguishes conservative (rotational-free) components from
mass-preserving (divergence-free) components. In this work, we explore the
applicability of this technique to the uncertainty analysis of 2-dimensional
vector fields. First, we present an approach that uses the Helmholtz-Hodge
Decomposition as a basic tool for the analysis of a vector field ensemble.
Given a vector field ensemble E, we first obtain the corresponding
rotational-free, divergence-free, and harmonic component ensembles by applying
the Natural Helmholtz-Hodge Decomposition to each vector field in E. With these
ensembles in hand, our proposal not only quantifies, via a statistical
analysis, how much each component ensemble is point-wise correlated to the
original vector field ensemble, but also allows investigating the uncertainty
of the rotational-free, divergence-free, and harmonic components separately.
Then, we propose two techniques that, jointly with the Helmholtz-Hodge
Decomposition, stochastically generate vector fields from a single realization.
Finally, we propose a method to synthesize vector fields from an ensemble,
using both Dimension Reduction and Inverse Projection techniques. We test the
proposed methods with synthetic vector fields as well as with simulated vector
fields.
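The two differential quantities that characterize the decomposition's components (a divergence-free component and a rotational-free, i.e. curl-free, component) can be illustrated with central finite differences on a gridded 2D field. This sketch only verifies those defining properties on a toy rotational field; it is not the Natural Helmholtz-Hodge Decomposition algorithm used in the thesis:

```python
def div_curl(u, v, h=1.0):
    """Central-difference divergence and curl (z-component) of a 2D field
    sampled on a regular grid with spacing h; interior points only.
    u[j][i], v[j][i] are the x- and y-components at row j, column i."""
    ny, nx = len(u), len(u[0])
    div = [[(u[j][i + 1] - u[j][i - 1]) / (2 * h)
            + (v[j + 1][i] - v[j - 1][i]) / (2 * h)
            for i in range(1, nx - 1)] for j in range(1, ny - 1)]
    curl = [[(v[j][i + 1] - v[j][i - 1]) / (2 * h)
             - (u[j + 1][i] - u[j - 1][i]) / (2 * h)
             for i in range(1, nx - 1)] for j in range(1, ny - 1)]
    return div, curl

# Pure rotation (u, v) = (-y, x): divergence 0 everywhere, curl 2 everywhere,
# so it would sit entirely in the divergence-free component of the HHD.
u = [[-float(j) for i in range(5)] for j in range(5)]
v = [[float(i) for i in range(5)] for j in range(5)]
div, curl = div_curl(u, v)
```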
Paulo Roberto Pereira de SOUZA FILHO.
Auxílio à
portabilidade de código em aplicações de alto desempenho. [Title in
English: Support for code portability in high performance computing applications].
MSc. Diss. Port. Presentation: 21/03/2016. 117 p. Advisor: Noemi da La Roque
Rodriguez.
Abstract: Today’s platforms are becoming increasingly heterogeneous. A given
platform may have many different computing elements: CPUs, coprocessors, and
GPUs of various kinds. This work proposes a way to keep some portion of code
portable across different heterogeneous platforms without compromising
performance. We implemented the HLIB library, which handles the preparation
code needed for heterogeneous computing; the library also transparently
supports traditional homogeneous platforms. To address multiple SIMD
architectures, we implemented OpenVec, a tool that helps the compiler enable
SIMD instructions. This tool provides a set of portable SIMD intrinsics and C++
operators to achieve portable explicit vectorization, covering SIMD
architectures from the last 17 years, from ARM Neon and Intel SSE to AVX-512
and IBM Power8 Altivec+VSX. We demonstrate the combined use of both tools with
petaflop HPC applications.
Philip Kuster DUNKER.
Uma ferramenta de telepresença de baixo custo usando Oculus Rift:
desenvolvimento e avaliação num cenário de videoconferência.
[Title in English: A low-cost telepresence tool using Oculus Rift:
development and evaluation in a videoconference scenario].
MSc. Diss. Port. Presentation: 06/04/2016. 70 p. Advisor: Alberto Barbosa
Raposo.
Abstract: Telepresence refers to a set of technologies that allow a person to
feel as if they were in a place other than their true location. Such equipment
uses an ordinary camera to film what is happening in an environment and
transmits it live on televisions or monitors to a user in another environment.
Sometimes the cameras can be controlled through devices such as keyboards or
joysticks. This dissertation presents a tool composed of a head-mounted display
(HMD), the Oculus Rift DK1, integrated with a device called the remote head,
able to film with a stereo camera and transmit the images in 3D to the Oculus
Rift. At the same time, the HMD's gyroscope captures the user's head
orientation and sends it to the remote head, whose servo motors rotate it
accordingly, allowing the user to move the stereo camera without any additional
device. The project's goal is to provide the user with an immersive
telepresence experience, at a low cost and with a simple interface. Tests with
users were performed and indicated the benefit of the tool for
videoconferencing.
Rafael Barbosa NASSER.
Uma plataforma na nuvem para armazenamento de dados georreferenciados de
mobilidade urbana. [Title in English: A cloud
computing platform for storing georeferenced mobility data].
MSc. Diss. Port. Presentation: 26/09/2016. 159 p. Advisor: Hélio Cortes Vieira
Lopes.
Abstract: The quality of life in urban centers has been a concern for
governments, businesses and the resident population in general. Public
transportation services play a central role in this discussion, since they
determine, especially for the lower-income layer of society, the time wasted
daily in commuting. In Brazilian cities, city buses are predominant in public
transportation. Users of this service - the passengers - do not have updated
information about buses and lines. Offering this kind of information contributes
to a better everyday experience of this mode of transport and therefore provides
greater quality of life for its users. In a broader view, buses can be
considered sensors that enable understanding patterns and identifying anomalies
in vehicle traffic in urban areas, bringing benefits for the whole population.
This work presents a platform in the cloud computing environment that captures,
enriches, stores and makes available the data from GPS devices installed on
buses, allowing the extraction of knowledge from this valuable and voluminous
set of information. Experiments are performed with the buses of the Municipality
of Rio de Janeiro, with applications focused on passengers and society. The
methodologies, discussions and techniques used throughout this work can be
reused for different cities, modes of transport and perspectives.
Roxana Lisette Quintanilha PORTUGAL.
Mineração de informação em linguagem
natural para apoiar a elicitação de requisitos. [Title in
English: Mining information
in natural language to
support requirements elicitation].
MSc. Diss. Port. Presentation: 19/04/2016. 96 p. Advisor: Julio Cesar Sampaio do
Prado Leite.
Abstract: This work describes the mining of natural language information from
the GitHub repository. It explains how the content of similar projects, given a
search domain, can be useful for knowledge reuse and thus help in Requirements
Elicitation tasks. Text mining techniques, domain-independent regularities, and
GitHub metadata constitute the method used to select relevant projects and the
information within them. An approach to achieve our goal is explained through an
exploratory study, along with the results achieved.
Sasha Nicolas da Rocha PINHEIRO.
Calibração de câmera usando projeção
frontal-paralela e colinearidade dos pontos de controle. [Title in
English: Camera calibration using fronto parallel projection and collinearity of
control points].
MSc. Diss. Port. Presentation: 01/09/2016. 63 p. Advisor: Alberto Barbosa
Raposo.
Abstract: Crucial for any computer vision or augmented reality application,
camera calibration is the process of obtaining the intrinsic and extrinsic
parameters of a camera, such as focal length, principal point and distortion
values. Nowadays, the most used calibration method comprises the use of images
of a planar pattern in different perspectives, in order to extract control
points to set up a system of linear equations whose solution represents the
camera parameters, followed by an optimization based on the 2D reprojection
error. In this work, the ring calibration pattern was chosen because it offers
higher accuracy in the detection of control points. By applying techniques such
as the fronto-parallel transformation, iterative refinement of the control
points and adaptive segmentation of ellipses, our approach achieved improvements
in the results of the calibration process. Furthermore, we proposed extending
the optimization model by modifying the objective function to account not only
for the 2D reprojection error but also for the 2D collinearity error.
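A collinearity term of the kind mentioned above can be sketched as follows. This is only an illustrative reconstruction, not the dissertation's actual objective function: it computes the RMS orthogonal distance of each row of control points to that row's total-least-squares line, which is zero exactly when the rows are straight.

```python
import math

def fit_line(points):
    # Total-least-squares line fit: returns a unit normal (nx, ny) and an
    # offset c such that nx*x + ny*y + c = 0 for points exactly on the line.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # Principal direction of the 2x2 scatter matrix; the normal is orthogonal.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    nx, ny = -math.sin(theta), math.cos(theta)
    return (nx, ny), -(nx * mx + ny * my)

def collinearity_error(rows):
    # RMS orthogonal distance of each row's control points to its best-fit
    # line; a term like this can be added to the reprojection error so the
    # optimizer is also rewarded for keeping pattern rows straight.
    total, count = 0.0, 0
    for pts in rows:
        (nx, ny), c = fit_line(pts)
        for x, y in pts:
            total += (nx * x + ny * y + c) ** 2
            count += 1
    return math.sqrt(total / count)
```

For perfectly collinear control points the error vanishes, so minimizing a weighted sum of reprojection and collinearity errors penalizes distortion estimates that bend the grid.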
Sonia FIOL GONZÁLEZ.
A novel committee-based
clustering method. [Title in
Portuguese: Um novo método de agrupamento baseado em comitê].
MSc. Diss. Eng. Presentation: 15/09/2016. 70 p. Advisor: Hélio Cortes Vieira
Lopes.
Abstract: In data analysis, in the process of quantitative modeling
and in the construction of decision support models, clustering, classification
and information retrieval algorithms are very useful. For these algorithms it is
crucial to determine the relevant features in the original dataset. To deal with
this problem, techniques for feature selection play an important role. Moreover,
it is recognized that in unsupervised learning tasks it is also difficult to
define the correct number of clusters. This research proposes a method based on
ensemble methods using all features from a dataset and varying the number of
clusters to calculate the similarity matrix between any two instances of the
dataset. Each element in this matrix stores the probability of the corresponding
instances to be in the same cluster in these multiple scenarios. Notice that the
similarity matrix might be transformed to a distance matrix to be used in other
clustering methods. The experiments were performed with a real-world dataset of
crimes in the city of Rio de Janeiro, showing the effectiveness of the proposed
technique.
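A minimal sketch of the committee idea, assuming plain Lloyd's k-means as the base clusterer (the abstract does not prescribe a particular one): cluster the data once per value of k, then record for each pair of points the fraction of runs in which they share a cluster.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Minimal Lloyd's k-means on 2-D points; returns one label per point.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                                  + (p[1] - centers[c][1]) ** 2)
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:  # keep the old center if the cluster went empty
                centers[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return labels

def coassociation(points, k_values):
    # Committee step: one clustering run per k; S[i][j] ends up holding the
    # fraction of runs in which points i and j fell in the same cluster.
    n = len(points)
    S = [[0.0] * n for _ in range(n)]
    for run, k in enumerate(k_values):
        labels = kmeans(points, k, seed=run)
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    S[i][j] += 1.0 / len(k_values)
    return S  # D[i][j] = 1 - S[i][j] gives a distance matrix for reuse
```

For two well-separated blobs, pairs inside a blob keep high similarity across every choice of k, while cross-blob pairs stay near zero, so the resulting matrix can feed any distance-based clustering method.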
Vicente Correa SILVA NETO.
Uma plataforma de jogos JRPG destinada à educação com entretenimento. [Title in
English: A JRPG game platform with the purpose of education and entertainment].
MSc. Diss. Port. Presentation: 22/09/2016. 82 p. Advisor: Waldemar Celes Filho.
Vinicius Costa Villas Bôas SEGURA.
BONNIE: Building Online Narratives from Noteworthy Interaction Events. [Title in
Portuguese: BONNIE: Construindo narrativas online a partir de eventos de
interacão relevantes].
MSc. Diss. Eng. Presentation: 26/09/2016. 154 p. Advisor: Simone Diniz Junqueira
Barbosa.
Abstract: Nowadays, we have access to data of unprecedentedly large size,
high dimensionality, and complexity. To extract unknown and unexpected
information from such complex and dynamic data, we need effective and efficient
strategies. One such strategy is to combine data analysis and visualization
techniques, which is the essence of visual analytics applications. After the
knowledge discovery process, a major challenge is to filter the essential
information that led to a discovery and to communicate the findings to other
people. We propose to take advantage of the trace left by the exploratory data
analysis, in the form of user interaction history, to aid in this process. With
the trace, the user can choose the desired interaction steps and create a
narrative, sharing the acquired knowledge with readers. To achieve our goal, we
have developed the BONNIE (Building Online Narratives from Noteworthy
Interaction Events) framework. The framework comprises a log model to register
the interaction events, auxiliary code to help the developer instrument his or
her own code, and an environment to view the user's own interaction history and
build narratives. This dissertation presents our proposal for communicating
discoveries in visual analytics applications, the BONNIE framework, and a few
empirical studies we conducted to evaluate our solution.
Vinícius de Lima COSTA.
Uma ferramenta de RV para tratamento
de fobia de voar controlada pelo terapeuta. [Title in
English: A VR
tool for fear of flying treatment controlled by the therapist].
MSc. Diss. Port. Presentation: 27/10/2016. 47 p. Advisor: Alberto Barbosa
Raposo.
Abstract: The problem known as fear of flying is common nowadays. Also known
by other names such as aviophobia or aerophobia, this kind of fear can be
defined as “a specific phobia noted by a persistent excessive fear of
travelling or possibility of travel through the air”. Many people suffer from
this kind of phobia, creating a high demand for treatments in this area. The
most effective way to treat someone is by in vivo exposure. However, this kind
of treatment is usually expensive, since there is a need to go to an airport and
to board a plane. In the end, the patient may not even try to go through with
the flight because of his/her excessive fear. The present work focuses on
creating a 3D virtual reality flight simulator from the passenger's point of
view. In addition to this simulator, there is also a mobile application that
controls the current state of the main application and the stimuli that can be
presented to the patient without interrupting the immersion in the main
application. The effectiveness of the virtual reality application in
transmitting the sense of fear and the effectiveness of the mobile application
were evaluated with the help of psychiatrists from IPUB/UFRJ and a pilot test,
plus a presentation to PUC-Rio psychiatrists.
Yanely Milanés BARROSO.
Structured learning with incremental feature induction
and selection for Portuguese dependency parsing. [Title in
Portuguese: Aprendizado estruturado com indução e seleção incrementais de
atributos para análise de dependência em Português].
MSc. Diss. Eng. Presentation: 09/03/2016. 92 p. Advisor: Ruy Luiz Milidiú.
Abstract: Natural language processing requires solving several tasks of
increasing complexity, which involve learning to associate structures like
graphs and sequences to a given text. For instance, dependency parsing involves
learning a tree that describes the dependency-based syntactic structure of a
given sentence. A widely used method to improve domain knowledge representation
in this task is to consider combinations of features, called templates, which
are used to encode useful information with nonlinear patterns. The total number
of all possible feature combinations for a given template grows exponentially in
the number of features and can result in computational intractability. Also,
from a statistical point of view, it can lead to overfitting. In this scenario,
a technique is required that avoids overfitting and reduces the feature set. A
very common approach to this task is based on scoring a parse tree using a
linear function of a defined set of features. It is well known that sparse
linear models simultaneously address the feature selection problem and the
estimation of a linear model, by combining a small subset of the available
features. In this case, sparseness helps control overfitting and performs the
selection of the most informative features, which reduces the feature set. Due
to its flexibility, robustness and simplicity, the perceptron algorithm is one
of the most popular linear discriminant methods used to learn such complex
representations. This algorithm can be modified to produce sparse models and to
handle nonlinear features. We propose the incremental learning of the
combination of a sparse linear model with an induction procedure of nonlinear
variables in a structured prediction scenario. The sparse linear model is
obtained through a modification of the perceptron algorithm. The induction
method is Entropy-Guided Feature Generation. The empirical evaluation is
performed using the Portuguese Dependency Parsing dataset from the CoNLL 2006
Shared Task. The resulting parser attains 92.98% accuracy, which is competitive
when compared against state-of-the-art systems. Its regularized version
accomplishes an accuracy of 92.83%, shows a striking reduction of 96.17% in the
number of binary features, and reduces the learning time by almost 90% when
compared to the non-regularized version.
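The sparse-perceptron idea can be illustrated with a toy binary version. This is not the dissertation's actual algorithm (which is structured and uses Entropy-Guided Feature Generation, both omitted here): run mistake-driven perceptron updates over sparse feature vectors and, after each epoch, drop weights whose magnitude falls below a threshold, so only informative features survive.

```python
def sparse_perceptron(examples, epochs=10, truncate=0.5):
    # examples: list of (feature-dict, label) pairs with labels in {-1, +1}.
    w = {}
    for _ in range(epochs):
        for feats, y in examples:
            score = sum(w.get(f, 0.0) * v for f, v in feats.items())
            if y * score <= 0:  # mistake-driven update, as in the perceptron
                for f, v in feats.items():
                    w[f] = w.get(f, 0.0) + y * v
        # Sparsification step: discard near-zero weights; this is the toy
        # analogue of the regularization that shrinks the feature set.
        w = {f: v for f, v in w.items() if abs(v) >= truncate}
    return w
```

On data where feature "a" predicts +1, "b" predicts -1, and "noise" carries no signal, the noise weight is truncated away while the informative weights remain, mirroring how sparseness selects features and controls overfitting.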