Theses and Dissertations
2015
ABSTRACTS
Departamento de Informática
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
This file contains the list of the M.Sc. dissertations and Ph.D. theses presented to the Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio, Brazil, in 2015. They are all available in print format and, according to the authors' preference, some of them are freely available for download, while others are freely available for download to the PUC-Rio community exclusively (*).
For any requests, questions, or suggestions, please contact:
Rosane Castilho
bib-di@inf.puc-rio.br
Last update: 21/DECEMBER/2016
[Under construction; some digital versions may not be available yet]
[15_PhD_branco]
Adriano Francisco BRANCO.
Scripting customized components for Wireless Sensor Networks. [Title in Portuguese: Programando redes de sensores sem fio com scripts sobre componentes customizados].
Ph.D. Thesis. Eng. Presentation: 10/09/2015. 100 p. Advisor: Noemi de La Rocque Rodriguez.
Abstract: Programming wireless sensor networks (WSN) is a difficult task. The
programmer must deal with several concurrent activities in an environment with
severely limited resources. In this work we propose a programming model to
facilitate this task. The model we propose combines the use of configurable
component-based virtual machines with a reactive scripting language which can be
statically analyzed to avoid unbounded execution and memory conflicts. This
approach allows the flexibility of remotely uploading code on motes to be combined
with a set of guarantees for the programmer. The choice of the specific set of
components in a virtual machine configuration defines the abstraction level seen
by the application script. To evaluate this model, we built Terra, a system
combining the scripting language Céu-T with the Terra virtual machine and a
library of components. We designed this library taking into account the
functionalities commonly needed in WSN applications - typically for sensing and
control. We implemented different applications using Terra and using an
event-driven language based on C and we discuss the advantages and disadvantages
of the alternative implementations. Finally, we also evaluate Terra by measuring
its overhead in a basic application and discussing its use and cost in different
WSN scenarios.
[15_PhD_skyrme]
Alexandre Rupert Arpini SKYRME.
Safe record sharing in dynamic programming languages. [Title in Portuguese: Compartilhamento seguro de registros em linguagens de programação dinâmicas].
Ph.D. Thesis. Eng. Presentation: 17/03/2015. 77 p. Advisors: Noemi de La Rocque Rodriguez and Roberto Ierusalimschy.
Abstract:
Abstract:
Dynamic programming languages have become increasingly popular and have been
used to implement a range of applications. Meanwhile, multicore processors have
become the norm, even for desktop computers and mobile devices. Therefore,
programmers must turn to parallelism as a means to improve performance. However,
concurrent programming remains difficult. Besides, despite improvements in
static languages, we find dynamic languages are still lacking in concurrency
support. In this thesis, we argue that the main problem with concurrent
programming is unpredictability: unexpected program behaviors, such as returning
out-of-thin-air values. We observe that unpredictability is most likely to
happen when shared memory is used. Consequently, we propose a concurrency
communication model to discipline shared memory in dynamic languages. The model
is based on the emerging concurrency patterns of not sharing data by default,
data immutability, and types and effects (which we turn into capabilities). It
mandates the use of shareable objects to share data. Besides, it establishes
that the only means to share a shareable object is to use message passing.
Shareable objects can be shared as read-write or read-only, which allows both
individual read-write access and parallel read-only access to data. We
implemented a prototype in Lua, called luashare, to experiment with the model in
practice, as well as to carry out a general performance evaluation. The
evaluation showed us that safe data sharing makes it easier to allow for
communication among threads. Besides, there are situations where copying data
around is simply not an option. However, enforcing control over shareable
objects has a performance cost, in particular when working with nested objects.
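As a rough illustration of the model's core rule (data is shared only through explicit shareable objects handed over by message passing, with read-write or read-only capabilities), the Python sketch below mimics the idea; luashare itself is a Lua library, and the names Shareable, ReadOnlyView and share are hypothetical:

```python
import threading
import queue

class Shareable:
    """Explicit shareable object guarding its data with a lock."""
    def __init__(self, data):
        self._data = data
        self._lock = threading.Lock()

    def read(self, key):
        with self._lock:
            return self._data[key]

    def write(self, key, value):
        with self._lock:
            self._data[key] = value

class ReadOnlyView:
    """Read-only capability: exposes read() but deliberately no write()."""
    def __init__(self, shareable):
        self._s = shareable

    def read(self, key):
        return self._s.read(key)

def share(obj, mode="read-only"):
    # Read-write sharing hands out the object itself; read-only sharing
    # hands out a view without mutation methods.
    return obj if mode == "read-write" else ReadOnlyView(obj)

channel = queue.Queue()
table = Shareable({"hits": 0})

def worker():
    view = channel.get()                 # received via message passing
    print("hits =", view.read("hits"))   # parallel read-only access is safe

threading.Thread(target=worker).start()
table.write("hits", 1)
channel.put(share(table, "read-only"))
```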
[15_MSc_villarmosa]
Alexandre Menezes VILLARMOSA.
Modelagem de agentes computacionais baseada na doutrina militar. [Title in English: Computational modeling of agents based on military doctrine].
M.Sc. Diss. Port. Presentation: 14/07/2015. 84 p. Advisor: Marcelo Gattass.
Abstract:
Since the beginning of the nineteenth century, combat simulations have been used
in military training. Large numbers of military personnel must be involved for
these training exercises to occur reliably. In the late 1940s the idea of
computational agents was developed in artificial intelligence and proved to be
an excellent tool to reduce the amount of personnel involved in combat
simulations. Agents perceive the environment where they are inserted and take
actions upon it following a set of rules. This resembles the behavior of a
soldier. A soldier, or a group of them, perceives the battlefield and takes a
series of actions based on military doctrine. Therefore, the scope of this work
is to present a viable way to define the behavior of computational agents based
on military doctrine, so that they can replace some of the personnel involved in
a combat simulation without affecting the reliability of the training in course.
In addition to making simulation systems more efficient by reducing the number
of military personnel required for their proper operation, this approach can
also help to check the logical consistency of the actions planned in doctrinal
manuals.
[15_MSc_moreira]
André de Souza MOREIRA.
Engenharia reversa em modelos CAD utilizando descritores de forma e máquina de vetores de suporte. [Title in English: Reverse engineering for CAD models using shape descriptors and support vector machine].
M.Sc. Diss. Port. Presentation: 27/03/2015. 72 p. Advisor: Marcelo Gattass.
Abstract:
3D CAD models have played an important role in the management of engineering
projects. Many of these files noticeably contain several objects with implicit
representations that end up being represented as triangular meshes. Although
suitable for rendering, the triangular mesh representation brings some
drawbacks, such as ambiguity in objects with a low discretization rate. Reverse
engineering aims to reconstruct this discrete representation to its
original continuous representation. In this work, we propose a novel methodology
for geometry reconstruction in CAD models using Support Vector Machines and
Shape Descriptors.
[15_PhD_maidl]
André Murbach MAIDL.
Typed Lua: an optional type system for Lua. [Title in Portuguese: Typed Lua: um sistema de tipos opcional para Lua].
Ph.D. Thesis. Port. Presentation: 10/04/2015. 149 p. Advisors: Roberto Ierusalimschy and Fábio Mascarenhas de Queiroz (UFRJ).
Abstract:
Dynamically typed languages such as Lua avoid static types in favor of
simplicity and flexibility, because the absence of static types means that
programmers do not need to bother with abstracting types that should be
validated by a type
checker. In contrast, statically typed languages provide the early detection of
many bugs, and a better framework for structuring large programs. These are two
advantages of static typing that may lead programmers to migrate from a
dynamically typed to a statically typed language, when their simple scripts
evolve into complex programs. Optional type systems allow combining dynamic and
static typing in the same language, without affecting its original semantics,
making this code evolution from dynamic to static typing easier. Designing an
optional type system for a dynamically typed language is challenging, as it
should feel natural to programmers that are already familiar with this language.
In this work we present and formalize the design of Typed Lua, an optional type
system for Lua that introduces novel features to statically type check some Lua
idioms and features. Even though Lua shares several characteristics with other
dynamically typed languages such as JavaScript, Lua also has several unusual
features that are not present in the type system of these languages. These
features include functions with flexible arity, multiple assignment, functions
that are overloaded on the number of return values, and the incremental
evolution of record and object types. We discuss how Typed Lua handles these
features and our design decisions. Finally, we present the evaluation results
that we achieved while using Typed Lua to type existing Lua code.
[15_PhD_monteiro]
Andrei Alhadeff MONTEIRO.
Mapping cohesive fracture and fragmentation simulations to GPUs.
Ph.D. Thesis. Eng. Presentation: 16/09/2015. 156 p. Advisor: Waldemar Celes Filho.
Abstract: A GPU-based computational framework is presented to deal with
dynamic failure events simulated by means of cohesive zone elements. We employ a
novel and simplified topological data structure relative to CPU implementation
and specialized for meshes with triangles or tetrahedra, designed to run
efficiently and minimize memory requirements on the GPU. We present a parallel,
adaptive and distributed explicit dynamics code that implements an extrinsic
cohesive zone formulation where the elements are inserted “on-the-fly”, when
needed and where needed. The main challenge for implementing a GPU-based
computational framework using an extrinsic cohesive zone formulation resides in
being able to dynamically adapt the mesh, in a consistent way, by inserting
cohesive elements on fractured facets and inserting or removing bulk elements
and nodes in the adaptive mesh modification case. We present a strategy to
refine and coarsen the mesh to handle dynamic mesh modification simulations on
the GPU. We use a reduced scale version of the experimental specimen in the
adaptive fracture simulations to demonstrate the impact of variation in floating
point operations on the final fracture pattern. A novel strategy to duplicate
ghost nodes when distributing the simulation in different compute nodes
containing one GPU each is also presented. Results from parallel simulations
show an increase in performance when adopting strategies such as distributing
different jobs amongst threads for the same element and launching many threads
per element. To avoid concurrency on accessing shared entities, we employ graph
coloring for non-adaptive meshes and node traversal for the adaptive case.
Experiments show that GPU efficiency increases with the number of nodes and bulk
elements.
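The graph-coloring strategy mentioned above can be sketched independently of the GPU code: elements that share a node receive different colors, so each color class can be processed concurrently without synchronization. The Python sketch below is illustrative only (the thesis's actual implementation is GPU-based); color_elements is a hypothetical helper:

```python
# Greedy coloring: elements sharing a node get different colors, so each
# color class can be updated in parallel without races on shared nodes.

def color_elements(elements):
    """elements: list of tuples of node indices (triangles/tetrahedra)."""
    node_to_elems = {}
    for e, nodes in enumerate(elements):
        for n in nodes:
            node_to_elems.setdefault(n, []).append(e)

    colors = [None] * len(elements)
    for e, nodes in enumerate(elements):
        # Colors already taken by neighbors that share a node with e.
        taken = {colors[other]
                 for n in nodes
                 for other in node_to_elems[n]
                 if colors[other] is not None}
        colors[e] = next(c for c in range(len(elements)) if c not in taken)
    return colors

# Elements with the same color touch disjoint nodes, so each color class
# could be launched as one batch of GPU threads without atomics.
mesh = [(0, 1, 2), (1, 2, 3), (3, 4, 5), (0, 5, 6)]
print(color_elements(mesh))  # -> [0, 1, 0, 1]
```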
[15_MSc_lutfi]
Antonio Nascimento LUTFI.
Um framework para game shows interativos de TV com realidade aumentada e segunda tela. [Title in English: A framework for interactive TV game shows with augmented reality and second screen].
M.Sc. Diss. Port. Presentation: 17/04/2015. 94 p. Advisor: Bruno Feijó.
Abstract:
This work presents a framework for the development of interactive TV game shows
using augmented reality in TV studios, which allows the participation of viewers
using tablets and smartphones as a second screen. This research also
investigates new convergence paradigms between TV and video games.
[15_MSc_escobarendara]
Ariel ESCOBAR ENDARA.
CubMed: um framework para a criação de aplicações de assistência médica ubíqua baseado em agentes de software colaborativos. [Title in English: CubMed: a framework for the creation of ubiquitous medical assistance applications based on collaborative software agents].
M.Sc. Diss. Port. Presentation: 28/08/2015. 90 p. Advisor: Carlos José Pereira de Lucena.
Abstract:
The health area needs to deal with various
problems related to issues of infrastructure, lack of qualified personnel and a
large number of patients. As a solution to problems of this nature, u-Healthcare
was created as an application of the concepts of Ubiquitous Computing (UbiComp)
in the area of health care. u-Healthcare allows health monitoring at any time
and place from electronic devices connected to the Internet. However, the
expansion of health monitoring to a ubiquitous environment cannot be performed
with protocols and procedures currently used, since this approach would
drastically increase the consumption of time and resources. For that reason, the
development of tools to provide health services can be supported in research
areas such as Multi-Agent System (MAS) and Computer Supported Cooperative Work
(CSCW). In that sense, MAS can be used to automate processes through the
properties of software agents. On the other hand CSCW gives the possibility of
establishing a model of cooperation among the participants on the application.
Based on these aspects, this work proposes the modeling and development of a
framework capable of providing support and help on the construction of dedicated
u-Healthcare applications which should be based on the concepts of MAS and CSCW.
To illustrate the use of the framework, two usage scenarios are presented. The
first scenario corresponds to a fetal monitoring system, which allows early
detection of fetal abnormalities. The second scenario consists of a drug
administration assistant, which allows the doctor to control drug use by his
patients.
[15_PhD_baffa]
Augusto Cesar Espíndola BAFFA.
Storytelling based on audience social interaction. [Title in Portuguese: Storytelling baseado na interação social da audiência].
Ph.D. Thesis. Port. Presentation: 07/07/2015. 91 p. Advisor: Bruno Feijó.
Abstract:
To tell a
story, the storyteller uses all his/her skills to entertain an audience. This
task not only relies on the act of telling a story, but also on the ability to
understand reactions of the audience during the telling of the story. It is not
so difficult to adapt a story for a single individual based on his/her
preferences and previous choices. However, the task of choosing what is best for
a group becomes quite complicated. The selection by majority voting cannot be
effective because it can discard alternatives that are secondary for some
individuals, but that would work better for the group in question. Thus, the
careless selection of events in a story could cause audience splitting, causing
some people to give up watching because they were not pleased. This thesis
proposes a new methodology to create tailored stories for an audience based on
personality traits and preferences of each individual. As an audience may be
composed of individuals with similar or mixed preferences, it is necessary to
consider a middle ground solution based on the individual options. In addition,
individuals may have some kind of relationship with others who influence their
decisions. The proposed model addresses all steps in the quest to please the
audience. It infers what the preferences are, computes the scene rewards for all
individuals, estimates their choices independently and in group, and allows
Interactive Storytelling systems to find the story that maximizes the expected
audience reward. The proposed model can easily be extended to other areas that
involve users interacting with digital environments.
[15_PhD_figueiredo]
Aurélio Moraes FIGUEIREDO.
Mapeamento de eventos sísmicos baseado em algoritmos de agrupamento de dados. [Title in English: Mapping seismic events using clustering-based methodologies].
Ph.D. Thesis. Port. Presentation: 20/08/2015. 69 p. Advisor: Marcelo Gattass.
Abstract:
We present clustering-based methodologies used to process 3D seismic data. They
first replace the volume voxels by corresponding feature samples representing
the local behavior of the seismic trace. After this step, samples are used as
entries to clustering procedures, and the resulting cluster maps are used to
create a new representation of the original volume data. This strategy finds the
global structure of the seismic signal. It strongly reduces the impact of noise
and small disagreements found in the voxels of the entry volume. These clustered
versions of the input seismic data can then be used in two different
applications: to map 3D horizons automatically and to produce visual attribute
volumes where seismic faults and any discontinuities present in the data are
highlighted. Concerning the horizon mapping, as the method does not use any
lateral similarity measure to organize horizon voxels into clusters, the
methodology is very robust when mapping difficult cases. It is capable of
mapping a great portion of the seismic interfaces present in the data. In the
case of the visualization attribute, it is constructed by applying an
auto-adaptable function that uses the voxel neighboring information through a
specific measurement that globally highlights the fault regions and other
discontinuities present in the original volume. We apply the methodologies to
real seismic data, mapping even seismic horizons severely interrupted by various
discontinuities and presenting visualization attributes where discontinuities
are adequately highlighted.
[15_MSc_pedras]
Bernardo Frankenfeld Villela PEDRAS.
EnvironRC: integrating collaboration and mobile communication to offshore engineering virtual reality applications. [Title in Portuguese: EnvironRC: integrando colaboração e comunicação móvel a aplicações de Engenharia Off-Shore em ambiente de realidade virtual].
M.Sc. Diss. Eng. Presentation: 11/09/2015. 90 p. Advisor: Alberto Barbosa Raposo.
Abstract:
Offshore Engineering visualization applications are, in most cases, very complex
and must display large amounts of data coming from computationally intensive
numerical simulations. To help analyze and better visualize the results, 3D
visualization can be used in conjunction with a Virtual Reality (VR)
environment. The main idea for this work began as we realized two different
demands that engineering applications had when running on VR setups: firstly, a
demand for visualization support in the form of better navigation and better
data analysis capabilities; secondly, a demand for collaboration, due to the
difficulties of coordinating a team with one member using VR. To meet these
demands, we developed a Service Oriented Architecture
(SOA) capable of adding collaboration capabilities to any application. The idea
behind our solution is to enable real-time data visualization and manipulation
on tablets and smartphones. Such devices can be used to help navigate the
virtual world or be used as a second screen, helping visualize and manipulate
large sets of data in the form of tables or graphs. Furthermore, we want to
allow collaboration-unaware applications to collaborate with as little reworking
of the original application as possible. Another big advantage that mobile
devices bring to the engineering applications is the capability of accessing the
data on remote locations, like on oil platforms or refineries, and so allowing
the field engineer to check the data or even change it on the fly. As our test
application, we used ENVIRON, a VR application for visualization of 3D models
and simulations developed in collaboration with a team from the Tecgraf
Institute of PUC-Rio. We added this solution to ENVIRON and it was tested with an
experiment and during a review process of Offshore Engineering using VR Setups
(Power wall and CAVE).
[15_MSc_cruz]
Breno Riba da Costa CRUZ.
Uma interface de programação para controle de sobrecarga em arquiteturas baseadas em estágios. [Title in English: A programming interface for overload control in staged event-based architectures].
M.Sc. Diss. Port. Presentation: 24/02/2015. 75 p. Advisors: Noemi de La Rocque Rodriguez and Ana Lúcia de Moura.
Abstract:
Specific scheduling policies can be appropriate for overload control in
different application scenarios. However, these policies are often difficult to
implement, leading developers to reprogram entire systems in order to adapt them
to a particular policy. Through the study of various scheduling policies, we
propose an interface model that allows the programmer to integrate new policies
and monitoring schemes to the same application in a Staged Event-Driven
Architecture. We describe the implementation of the proposed interface and the
results of its use in implementing a set of scheduling policies for two
applications with different load profiles.
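As a rough sketch of the kind of interface described (not the dissertation's actual API; all names here are hypothetical), the Python code below separates a stage from its scheduling policy behind a minimal enqueue/next interface, so a FIFO policy can be swapped for a priority-based one without reprogramming the stage:

```python
import heapq
from collections import deque

class FifoPolicy:
    """Events leave the stage in arrival order."""
    def __init__(self):
        self._q = deque()
    def enqueue(self, event):
        self._q.append(event)
    def next(self):
        return self._q.popleft() if self._q else None

class PriorityPolicy:
    """A key function decides which event runs first, e.g. cheapest job
    or earliest deadline, which is useful under overload."""
    def __init__(self, key):
        self._q, self._key, self._seq = [], key, 0
    def enqueue(self, event):
        heapq.heappush(self._q, (self._key(event), self._seq, event))
        self._seq += 1
    def next(self):
        return heapq.heappop(self._q)[2] if self._q else None

class Stage:
    """A stage only depends on the enqueue/next interface, so policies
    and monitoring hooks can be integrated without rewriting it."""
    def __init__(self, handler, policy):
        self.handler, self.policy = handler, policy
    def post(self, event):
        self.policy.enqueue(event)
    def run_once(self):
        event = self.policy.next()
        if event is not None:
            self.handler(event)

stage = Stage(print, PriorityPolicy(key=lambda e: e["cost"]))
stage.post({"cost": 3})
stage.post({"cost": 1})
stage.run_once()   # handles {"cost": 1} first
```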
[15_MSc_chagas]
Bruno
Azevedo CHAGAS.
End-user configuration in assistive technologies: a case study with a severely
physically impaired user. [Title in Portuguese:
Configuração pelo usuário final em tecnologias assistivas: um estudo de caso com
um usuário com limitação física severa].
M.Sc. Diss. Port. Presentation: 04/09/2015. 124 p. Advisors: Hugo Fuks and
Clarisse Sieckenius de Souza.
Abstract:
Assistive Technology (AT) aims at compensating for motor, sensory or cognitive
functional limitations of its users. One of the reasons AT is hard to design and
turn into a product is the variability of kinds and degrees of disabilities and
individual characteristics among users (physical, psychological, cultural and
environmental). This variability can be addressed by means of configurations.
This work takes as a starting point the premise that the ability for the
end-user to adapt AT may have the potential to improve user’s experience and the
quality of the products. However, before engaging in such endeavor we must
answer questions like: what is configuration in the AT domain? What does AT mean
to users (and stakeholders)? What could, should or should not be configured and
how? In this work, we conducted a case study mixing ethnography and action
research with a single tetraplegic participant who came to our lab seeking
technology that could help him in his daily life. First, we interviewed him and
observed his daily needs and activities and then we developed an AT platform
prototype that controls some devices to be operated simultaneously by gesture
and voice interaction in “his smart home.” Throughout two action-research
cycles, we investigated interaction and technological issues regarding our
prototype configuration and use. Based on our findings, we propose a set of
dimensions and a collaborative framework for AT configuration. Our main
contribution is to propose a conceptual structure for organizing the AT
configuration problem space to support the design of similar technologies.
[15_PhD_cafeo]
Bruno Barbieri de Pontes CAFEO.
On the relationship between feature dependencies and change propagation. [Title in Portuguese: Investigando o relacionamento entre dependências de características e propagação de mudanças].
Ph.D. Thesis. Eng. Presentation: 12/06/2015. 167 p. Advisor: Alessandro Fabrício Garcia.
Abstract:
Features are the key abstraction to develop and maintain software product lines.
A challenge faced in the maintenance of product lines is the understanding of
the dependencies that exist between features. In the source code, a feature
dependency occurs whenever program elements within the boundaries of a feature’s
implementation depend on elements external to that feature. Examples are either
attributes or methods defined in the realisation of a feature, but used in the
code realising other features. As developers modify the source code associated
with a feature, they must ensure that other features are consistently updated
with the new changes – the so-called change propagation. However, appropriate
change propagation is far from being trivial as features are often not
modularised in the source code. In this way, given a change in a certain
feature, it is challenging to reveal which (part of) other features should also
change. Change propagation becomes, therefore, a central and non-trivial aspect
of software product-line maintenance. Developers may overlook important parts of
the code that should be revised or changed, thus not fully propagating changes.
Conversely, they may also unnecessarily analyse parts that are not relevant to
the feature-maintenance task at hand, thereby increasing the maintenance effort
or even mis-propagating changes. The creation of a good mental model based on
the structure of feature dependencies becomes essential for gaining insight into
the intricate relationship between features in order to properly propagate
changes. Unfortunately, there is no understanding in the state of the art about
structural properties of feature dependencies that affect change propagation.
This understanding is not yet possible as: (i) there is no conceptual
characterisation and quantification means for structural properties of feature
dependency, and (ii) there is no empirical investigation on the influence of
these properties on change propagation. In this context, this thesis presents
three contributions to overcome the aforementioned problems. First, we develop a
study to understand change propagation in the presence of feature dependencies in
several industry-strength product lines. Second, we propose a measurement
framework intended to quantify structural properties of feature dependencies. We
also develop a study revealing that conventional metrics typically used in
previous research, such as coupling metrics, are not effective indicators of
change propagation in software product lines. Our proposed metrics consistently
outperformed conventional metrics. Third, we also propose a method to support
change propagation by framing the organisation of feature dependency information
as a clustering problem. We evaluate whether our proposed organisation has the potential
to help developers to propagate changes in software product lines.
[15_MSc_amaral]
Bruno Guberfain do AMARAL.
A visual analysis of bus GPS data in Rio. [Title in Portuguese: Uma análise visual dos dados de GPS dos ônibus no Rio].
M.Sc. Diss. Port. Presentation: 12/06/2015. 45 p. Advisor: Hélio Côrtes Vieira Lopes.
Abstract:
Smart cities are a current subject of interest for public administrators and
researchers. Making cities smarter is one of the challenges for the near future,
due to the growing demand for public services. In particular, public
transportation is one of the most visible aspects of a living city and,
therefore, its implementation must
be very efficient. The public transportation system of the City of Rio de
Janeiro is historically deficient, mostly because it is based on an old bus
system. To change it, the City Hall took some actions, such as the development
of an open data project that shows, at about every minute, the GPS instant
position of all buses in the city. Although it is not a new technology, it is
the first initiative to be developed in Rio. This work presents simple tools for
the visual exploration of this big dataset based on the historical information
from this service, which reaches a total of more than one billion samples. With
these tools one is able to discover trends, identify patterns, and locate
abnormalities within the massive collection of the buses' GPS data.
[15_MSc_souza]
Bruno José Olivieri SOUZA.
An approach for movement coordination of swarms of unmanned aerial vehicles using mobile networks. [Title in Portuguese: Uma abordagem para a coordenação do movimento de enxames de veículos aéreos não tripulados usando redes móveis].
M.Sc. Diss. Port. Presentation: 18/03/2015. 67 p. Advisor: Markus Endler.
Abstract:
This work
presents an approach to coordinate swarms of Unmanned Aerial Vehicles (UAV)
based on Internet communication provided by mobile phone networks. Several
activities can be done by several UAVs flying in formation, such as surveillance
and monitoring of mass events, search and rescue tasks, control of agricultural
pests, monitoring and forest conservation, inspection of pipelines and
electricity distribution networks or even military attack and recognition
missions. The coordination of a UAV swarm can be split into two sub-problems:
communication between members of the swarm and the algorithm that controls the
members' behavior regarding their movements. The proposed solution assumes the
use of a smartphone coupled with each UAV of the swarm, in order to provide the
required level of reliable communication on the mobile Internet and run the
proposed algorithm for the coordination of swarms of UAVs. Experiments were
performed with emulated UAVs and WAN mobile networks. The results have
demonstrated the effectiveness of the proposed algorithm, and have shown the
influence of the network latency and the UAV speeds on the accuracy of the
movement coordination in the swarms.
[15_MSc_vanderley]
Carla Galdino VANDERLEY.
Uma sistemática de monitoramento de erros em sistemas distribuídos. [Title in English: A systematic approach for error monitoring in distributed systems].
M.Sc. Diss. Port. Presentation: 01/04/2015. 56 p. Advisor: Arndt von Staa.
Abstract:
Systems formed by distributed components are subject to faults arising from the
interaction between components. Testing this class of systems is difficult,
since foreseeing all interactions between the components of a system is not
feasible. Therefore, even if a component of the system is tested, the occurrence
of run-time errors is still possible and, of course, these errors should be
observed, triggering some action that prevents them from causing major damage.
This work presents a mechanism for identifying errors caused by semantic
inconsistency of typed data, based on structured logs. Semantic inconsistency of
typed data can cause failures due to the misinterpretation of values that are
represented syntactically under the same basic types. The proposed mechanism
consists of generating structured logs according to the definition of
communication interfaces, and identifying anomalies with an existing contract
verification technique. In addition, the mechanism proposes a management model
of taxonomic semantic types using the Case-Based Reasoning (CBR) technique. The
fault identification mechanism was implemented as an extension of the Robot
Operating System (ROS) middleware. The mechanism, in addition to observing
errors, generates additional information to assist in diagnosing the cause of
the observed error. Finally, a proof of concept applied to a locomotion control
system for a hybrid robot, adapted from a real system, was developed to validate
the fault identification.
[15_MSc_fernandes]
Chrystinne Oliveira FERNANDES.
IoT4Health: um framework no domínio de e-health para acompanhamento de pacientes utilizando agentes de software. [Title in English: IoT4Health: a framework in the e-health domain for patient monitoring using software agents].
M.Sc. Diss. Port. Presentation: 27/08/2015. 144 p. Advisor: Carlos José Pereira de Lucena.
Abstract: The search for innovative solutions in the E-Health domain has largely
motivated scientific research in this area, whose exploration can bring numerous
benefits to society. Despite the technological resources available nowadays,
there are still many problems in the hospital environment. Aiming to contribute
to the development of technological solutions applied to this area, we highlight
the development of the IoT4Health framework. Instances of this framework were
built as proof of concept: (1) Agents4Health; (2) Remote Patient Monitoring
(RPM); (3) the EHealth system. Agents4Health is a multi-agent system in the
E-health domain supported by an Internet of Things (IoT) solution to automate techniques
commonly used in patients' treatment and data collection processes. This
solution comprises software agents and hardware prototypes including sensors,
micro-controllers that work together to make hospital environments more
proactive. In addition, the solution provides remote storage of patient data in
cloud-based platforms, allowing external professionals to work collaboratively
with the local team. A Web system enables real-time visualization of patient's
record captured through sensors, such as temperature and heart rate values
displayed as graphical charts through an intuitive interface. Software agents
constantly monitor collected data to detect anomalies in patients' health status
and send alerts to health professionals. The RPM also supports patient
monitoring activities by using mobile applications, with a focus on patient
evolution. Finally, the EHealth system comprises the set of applications created
in order to validate our tool.
[15_MSc_gama]
Daltro Simões GAMA.
Integração de um sistema de submissão batch com um ambiente de computação em nuvem. [Title in English: Integration of a batch submission system with a cloud computing environment].
M.Sc. Diss. Port. Presentation: 21/12/2015. 83 p. Advisors: Noemi de La Rocque Rodriguez and Maria Júlia de Lima.
Abstract:
Cloud computing appeals to those who need many machines to run their programs,
attracted by low maintenance costs and easy configuration. In this work we
implemented a new integration for the CSGrid system, from Tecgraf/PUC-Rio,
enabling it to submit workloads to Microsoft Azure public cloud, thus enjoying
the benefits of elastic computing resources. For this purpose, we present
related works and some performance measures in the case of CSGrid's use of
Microsoft Azure public cloud, with regard to costs on data transfers and
provisioning of virtual machines. With this integration, we could evaluate the
benefits and difficulties involved in using cloud resources in a system designed
for the submission of HPC applications to clusters.
[15_MSc_marques]
Daniel da Rosa MARQUES.
Um metaclassificador para encontrar k-classes mais relevantes. [Title in English: A metaclassifier for finding the k most relevant classes].
M.Sc. Diss. Port. Presentation: 24/11/2015. 56 p. Advisor: Eduardo Sany Laber.
Abstract:
Consider a network with k nodes that may fail during its operation. Furthermore,
assume that it is impossible to check all nodes whenever a failure occurs.
Motivated by this scenario, we propose a method that uses supervised learning to
generate rankings of the most likely nodes responsible for the failure. The
proposed method is a meta-classifier that is able to use any kind of classifier
internally, where the model generated by the meta-classifier is a composition of
those generated by the internal classifiers. Each internal model is trained with
a subset of the data created from the elimination of instances whose classes
were already put in the ranking. Metrics derived from Accuracy, Precision and
Recall were proposed and used to evaluate this method. Using a public data set,
we verified that the training and classification times of the meta-classifier
were greater than those of a simple classifier. However, it reaches better
results in some cases, as with decision trees, which exceeded the benchmark
accuracy by a margin greater than 5%.
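A minimal sketch of the method as described in the abstract, using scikit-learn decision trees as the internal classifier (the names below are hypothetical, not the dissertation's code): the top-ranked class is predicted by a model trained on all data, the next one by a model trained on the data left after eliminating instances of the classes already ranked, and so on:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class RankingMetaClassifier:
    def __init__(self, base_factory, k):
        self.base_factory = base_factory  # any classifier constructor
        self.k = k                        # ranking length
        self._cache = {}                  # frozenset(excluded) -> model

    def fit(self, X, y):
        self.X_, self.y_ = np.asarray(X), np.asarray(y)
        return self

    def _model_excluding(self, excluded):
        # Internal model trained on the subset obtained by eliminating
        # instances whose classes were already put in the ranking.
        key = frozenset(excluded)
        if key not in self._cache:
            keep = ~np.isin(self.y_, list(excluded))
            model = self.base_factory().fit(self.X_[keep], self.y_[keep])
            self._cache[key] = model
        return self._cache[key]

    def rank(self, x):
        ranking = []
        for _ in range(self.k):
            model = self._model_excluding(ranking)
            ranking.append(model.predict([x])[0])
        return ranking

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = ["a", "b", "c", "d"]
meta = RankingMetaClassifier(DecisionTreeClassifier, k=3).fit(X, y)
print(meta.rank([0, 0]))  # the 3 most likely classes, most likely first
```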
[15_MSc_ferreira]
Daniel Vitor Costa FERREIRA.
Lean communication-centered design: um processo leve de design centrado na comunicação. [Title in English: Lean communication-centred design: a lightweight design process].
M.Sc. Diss. Port. Presentation: 10/09/2015. 159 p. Advisor: Simone Diniz Junqueira Barbosa.
Abstract:
Lean Communication-Centered Design (LeanCCD) is a Human-Computer
Interaction (HCI) design process, which consists of conducting a workshop,
detailing user goals, combining interaction models with paper sketches, and
testing them with users, supported by guides and templates. This study adapted
the Communication-Centered Design (CCD) and the eXtreme Communication-Centered
Design (eXCeeD), other communication-centered design processes grounded on
Semiotic Engineering (SemEng). SemEng defines the interaction as a
computer-mediated communication process between designers and users. Approaches
and processes based on SemEng are not used to directly yield the answer to a
problem, but to increase the problem-solver’s understanding of the problem
itself and the implications it brings about. Evaluating the process in an
industry case study proved difficult, both in carrying out LeanCCD activities
and in the correct application of some techniques and concepts.
However, unlike eXCeeD, we were able to observe a systematic use of questions
that contributed to designers’ reflection, aided by the proposed templates and
guides.
[15_MSc_silva]
Davidson Felipe da SILVA.
Keepfast – um ambiente extensível dirigido por modelos para avaliação de desempenho de páginas web. [Title in English: Keepfast – a model-driven framework for evaluating web page performance].
M.Sc. Diss. Port. Presentation: 03/09/2015. 76 p. Advisor: Daniel Schwabe.
Abstract:
Over the years, there has been an increase in complexity on the client side of
Web application architecture. Consequently, the addition of functionality or
change in the implementations on the client side often leads to a drop in
performance, which should be avoided. This effect is compounded due to the
constant evolution of implementation technologies and the growing number of
devices with web connection. This work presents a model-driven framework for
assessing web page performance on the client side of the application, allowing
for a variety of performance evaluation contexts. This environment can be extended and
customized to reflect the most important features that the designer wants to
evaluate. A case study is presented, showing the results obtained in a real
scenario compared to other methodologies available.
[15_PhD_barbosa]
Eiji Adachi Medeiros BARBOSA.
Global-aware recommendations for repairing exception handling violations. [Title in Portuguese: Recomendações globais para reparação de violações de tratamento de exceções].
Ph.D. Thesis. Eng. Presentation: 00/11/2015. 213 p. Advisor: Alessandro Fabrício Garcia.
Abstract:
Exception handling is the most common way of dealing with
exceptions in robust software development. Exception handling refers to the
process of signaling exceptions upon the detection of runtime errors and taking
actions to respond to their occurrence. Despite being aimed at improving
software robustness, software systems are still implemented without relying on
explicit exception handling policies. Each policy defines the design decisions
governing how exception handling should be implemented in a system. These
policies are often not documented and are only implicitly defined in the system
design. Thus, developers tend to introduce in the source code violations of
implicit policies and these violations commonly cause failures in software
systems. In this context, the goal of this thesis is to support developers in
detecting and repairing exception handling violations. To achieve this goal, two
complementary solutions were proposed. The first solution is based on a
domain-specific language supporting the detection of violations by explicitly
defining exception handling policies to be enforced in the source code. The
proposed language was evaluated with a user-centric study and a case study. With
the observations and experiences gathered in the user-centric study, we
identified some language characteristics that hindered its use and that
motivated new language constructs. In addition, the results of the case study
showed that violations and faults in exception handling share common causes.
Therefore, violations can be used to detect potential causes of exception-related
failures. To complement the detection of exception handling violations, this
work also proposed a solution for supporting the repair of exception handling
violations. Repairing these violations requires reasoning about the global
impact that exception handling changes might have in different parts of the
system. Thus, this work proposed a recommender heuristic strategy that takes
into account the global context of where violations occur to produce
recommendations. Each recommendation produced consists of a sequence of
modifications that serves as a detailed blueprint of how an exception handling
violation can be removed from the source code. The proposed recommender strategy
also takes advantage of explicit policy specifications, although their
availability is not mandatory. The results of our empirical assessments revealed
that the proposed recommender strategy produced recommendations able to repair
violations in approximately 70% of the cases. When policy specifications are
available, it produced recommendations able to repair violations in 97% of the
cases.
[15_MSc_menendez]
Elisa Souza MENENDEZ.
Materialized sameAs link maintenance with views. [Title in Portuguese: Manutenção de links sameAs materializados utilizando visões].
M.Sc. Diss. Port. Presentation: 20/07/2015. 68 p. Advisor: Marco Antonio Casanova.
Abstract:
In the Linked Data field, data publishers frequently materialize sameAs links
between two different datasets using link discovery tools. However, it may be
difficult to specify linking conditions, if the datasets have complex models. A
possible solution lies in stimulating dataset administrators to publish simple
predefined views to work as resource catalogues. A second problem is related to
maintaining materialized sameAs linksets, when the source datasets are updated.
To help solve this second problem, this work presents a framework for
maintaining views and linksets using an incremental strategy. The key idea is to
re-compute only the set of updated resources that are part of the view. This
work also describes an experiment to compare the performance of the incremental
strategy with the full re-computation of views and linksets.
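The key idea of the incremental strategy can be sketched in a few lines of Python (an illustrative simplification, not the dissertation's framework; refresh_linkset and the dict-based views are hypothetical):

```python
# Views are modeled as dicts from resource URI to (simplified)
# properties; a linkset is a set of (uri_a, uri_b) sameAs pairs.

def refresh_linkset(linkset, view_a, view_b, updated_in_a, match):
    """Incrementally maintain linkset after updates on view_a's side."""
    # Drop links whose left resource was updated or removed.
    linkset = {(a, b) for (a, b) in linkset if a not in updated_in_a}
    # Re-match only the updated resources still in the view, instead of
    # recomputing every link from scratch.
    for a in updated_in_a & view_a.keys():
        for b, props_b in view_b.items():
            if match(view_a[a], props_b):
                linkset.add((a, b))
    return linkset

view_a = {"a:1": {"label": "rio"}, "a:2": {"label": "niteroi"}}
view_b = {"b:9": {"label": "rio"}}
links = {("a:2", "b:9")}               # stale link from an old version
links = refresh_linkset(links, view_a, view_b, {"a:1", "a:2"},
                        lambda p, q: p["label"] == q["label"])
print(links)                           # {('a:1', 'b:9')}
```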
[15_MSc_leal]
Ericsson de Souza LEAL.
Traçado de linhas de fluxo em modelos de reservatórios naturais de petróleo baseado em métodos numéricos adaptativos. [Title in English: Streamline tracing for natural oil reservoirs based on adaptive numerical methods].
M.Sc. Diss. Port. Presentation: 01/09/2015. 51 p. Advisor: Waldemar Celes Filho.
Abstract:
Traditionally, streamlines in discrete models of natural oil reservoirs are
traced by solving an ordinary differential equation in a cell-by-cell way, using
analytical or numerical solutions, considering the local velocity of each cell.
This strategy has a disadvantage: the streamline is traced considering a
discrete, and so discontinuous, vector field. Furthermore, for massive models,
to solve the equation in a cell-by-cell way may be inefficient. In this work, we
explore a different strategy: the streamline tracing considers a continuous
vector field represented by the discrete model. Therefore, we propose: (i) to
use a spatial structure to speed up the point location process inside the
reservoir model; (ii) to use spherical interpolation to obtain the velocity
field from the discrete model; (iii) to use an adaptive numerical method to
control the numerical error from the integration process. The results obtained
for actual reservoir models demonstrate that the proposed method fulfills the
precision requirements, keeping a good performance.
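A minimal sketch of the numerical core of this strategy, assuming SciPy's adaptive RK45 integrator and linear grid interpolation in place of the spherical interpolation and point-location structure described above (all names and the toy field are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

# A toy discrete velocity field on a regular grid stands in for the
# cell velocities of a reservoir model.
xs = ys = np.linspace(0.0, 1.0, 21)
X, Y = np.meshgrid(xs, ys, indexing="ij")
U, V = 0.5 - Y, X - 0.5          # rotational field around (0.5, 0.5)
interp_u = RegularGridInterpolator((xs, ys), U)
interp_v = RegularGridInterpolator((xs, ys), V)

def velocity(t, p):
    # Continuous field obtained by interpolating the discrete model.
    return [interp_u(p)[0], interp_v(p)[0]]

# Adaptive RK45 picks its own step sizes to keep the local integration
# error under rtol/atol, which is the role of the adaptive numerical
# method in the proposed tracing strategy.
sol = solve_ivp(velocity, (0.0, 10.0), [0.5, 0.1],
                method="RK45", rtol=1e-6, atol=1e-9)
streamline = sol.y.T             # (x, y) points along the streamline
print(len(streamline), streamline[-1])
```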
[15_PhD_sacramentoferreira]
Eveline Russo SACRAMENTO FERREIRA.
An approach for dealing with inconsistencies in data mashups. [Title in Portuguese: Uma abordagem para lidar com inconsistências em combinações de dados].
Ph.D. Thesis. Eng. Presentation: 11/09/2015. 100 p. Advisor: Marco Antonio Casanova.
Abstract:
With the amount of data available on the Web, consumers can “mash up” and quickly
integrate data from different sources belonging to the same application domain.
However, data mashups constructed from independent and heterogeneous data
sources may contain inconsistencies and, therefore, puzzle the user when
observing the data. This thesis addresses the problem of creating a consistent
data mashup from mutually inconsistent data sources. Specifically, it deals with
the problem of testing when the data to be combined is inconsistent with respect
to a predefined set of constraints. The main contributions of this thesis are: (1)
the formalization of the notion of consistent data mashups by treating the data
returned from the data sources as a default theory and considering a consistent
data mashup as an extension of this theory; (2) a model checker for a family of
Description Logics, which analyzes and separates consistent from inconsistent
data and also tests the consistency and completeness of the obtained data
mashups; (3) a heuristic procedure for computing such consistent data mashups.
[15_PhD_guimaraes]
Everton Tavares GUIMARÃES.
A blueprint-based approach for prioritizing and ranking critical code anomalies. [Title in Portuguese: Uma abordagem baseada em blueprints para priorização e classificação de anomalias de código críticas].
Ph.D. Thesis. Eng. Presentation: 12/09/2015. 142 p. Advisor: Alessandro Fabrício Garcia.
Abstract:
Software systems are often evolving due to many changing requirements. As the
software evolves, it grows in size and complexity, and consequently, its
architecture design tends to degrade. Architecture degradation symptoms are
often a direct consequence of the progressive insertion of code anomalies in the
software implementation. A code anomaly is a recurring implementation structure
that possibly indicates deeper architectural design problems. Code anomaly is
considered critical when it is related with a structural problem in the software
architecture. Its criticality stems from its negative influence on a wide range
of non-functional requirements. For instance, the presence of critical code
anomalies hinders software maintainability, i.e. these critical anomalies
require wide refactoring in order to remove an architectural problem. Symptoms
of architecture degradation have often to be observed in the source code due to
the lack of an explicit, formal representation of the software architecture in a
project. Many approaches are proposed for detecting code anomalies in software
systems, but none of them efficiently support the prioritization and ranking of
critical code anomalies according to their architecture impact. Our work
investigates how the prioritization and ranking of such critical code anomalies
could be improved by using blueprints. Architecture blueprints are usually
provided by software architects since the early stages of the system
development. Blueprints are informal design models usually defined to capture
and communicate key architectural design decisions. Even though blueprints are
often incomplete and inconsistent with respect to the underlying implementation,
we aim to study if their use can contribute to improve the processes of
prioritizing and ranking critical code anomalies. Aiming to address these
research goals, a set of empirical studies has been performed. We also proposed
and evaluated a set of heuristics to support developers when prioritizing and
ranking code anomalies in 3 software systems. The results showed an average
accuracy higher than 60% when prioritizing and ranking code anomalies associated
with architectural problems in these systems.
[15_MSc_bertti]
Ezequiel BERTTI.
MIRA – um ambiente para interfaces dirigidas por modelos para aplicações REST. [Title in English: MIRA – a model-driven interface framework for REST applications].
M.Sc. Diss. Port. Presentation: 02/03/2015. 137 p. Advisor: Daniel Schwabe.
Abstract:
This work presents a model-driven framework for the design of interfaces for
REST applications. The framework allows building interfaces with minimal
programming. The models used, as well as the generated interfaces, are
represented using W3C standards. A qualitative evaluation indicates that there
are gains in both productivity and quality of the generated interfaces, when
compared with traditional approaches.
[15_PhD_silva]
Fabio Araujo Guilherme da SILVA.
Emotions in plots with non-deterministic planning for interactive storytelling. [Title in Portuguese: Emoções em enredos com planejamento não-determinístico para narração interativa].
Ph.D. Thesis. Port. Presentation: 14/04/2015. 156 p. Advisor: Antonio L. Furtado.
Abstract:
Interactive
storytelling is a form of digital entertainment in which users participate in
the process of composing and dramatizing a story. In this context, determining
the characters’ behaviour according to their individual preferences can be an
interesting way of generating plausible stories where the characters act in a
believable manner. Diversity of stories and opportunities for interaction are
key requirements to be considered when designing such applications. This thesis
proposes the creation of an architecture and a prototype for the generation and
dramatization of interactive nondeterministic plots, using a model of emotions
that not only serves to guide the actions of the characters presented by the
plan generation algorithm, but also influences the participation of the users.
Also, to improve the quality and diversity level of the stories, characters must
be able to evolve in terms of their personality traits as the plot unfolds, as a
reflection of the events they perform or are exposed to.
[15_MSc_moreira]
Felipe Baldino MOREIRA.
An experiment on conceptual design of pervasive mobile games using quality requirements. [Title in Portuguese: Um experimento no design conceitual de jogos pervasivos móveis usando requisitos de qualidade].
M.Sc. Diss. Port. Presentation: 10/07/2015. 88 p. Advisor: Bruno Feijó.
Abstract:
Pervasive games are an emerging game genre that mixes mobile devices (such as
smartphones), virtual worlds, and gameplay based on the real world, creating
mixed-reality games. This recent
area lacks literature about conceptual design and quality requirements related
to pervasiveness – the ultimate and elusive quality that differentiate pervasive
mobile games from traditional digital games. In the present work, we discuss the
development of a pervasive mobile game using quality requirements related to
pervasiveness. Also, we consider those requirements in the entire game project
(e.g., design, production, and post-production stages), focusing on the
analysis, implementation, and gameplay. We expect that our results could help in
improving the current state of design guidelines to develop pervasive mobile
games.
[15_MSc_ferreira]
Fischer Jônatas FERREIRA.
Uma análise da eficácia de assertivas executáveis como observadora de falhas em software. [Title in English: An analysis of the effectiveness of executable assertions as observers of software failures].
M.Sc. Diss. Port. Presentation: 09/04/2015. 117 p. Advisor: Arndt von Staa.
Abstract:
Absolute reliability of software is considered unattainable, because even when
it is built following strict quality rules, software is not free of failure
occurrences during its lifetime. Software's reliability level is related, among
other factors, to the amount of remaining defects that will be exercised during
its use. If software contains fewer remaining defects, it is expected that
failures will occur less often, although many of these defects will never be
exercised during its useful life. However, libraries and remote services of
dubious quality are frequently used. In an attempt to enable software to check
for mistakes at runtime, Lightweight Formal Methods, by means of executable
assertions, can hypothetically be an effective and economically viable means to
ensure software's reliability both at test time and at run-time. The main objective of this
research is to evaluate the effectiveness of executable assertions for the
prevention and observation of run-time failures. Effectiveness was evaluated by
means of experiments. We instrumented data structures with executable
assertions, and subjected them to tests based on mutations. The results have
shown that all non-equivalent mutants were detected by assertions, although
several of them were not detected by tests using non-instrumented versions of
the programs. Furthermore, estimates of the computational cost for the use of
executable assertions are presented. Based on the infrastructure created for the
experiments we propose an instrumentation policy using executable assertions to
be used for testing and to safeguard run-time.
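As a minimal illustration of the kind of instrumentation evaluated (not the dissertation's actual code), the Python sketch below adds an executable assertion to a sorted-list data structure; a mutant that breaks the invariant is detected immediately at run time:

```python
import bisect

class SortedList:
    """A sorted container whose representation invariant is re-checked
    after every mutation, turning a latent defect into an immediate,
    observable failure at the point where it is introduced."""

    def __init__(self):
        self._items = []

    def _assert_invariant(self):
        # Executable assertion: the representation must stay sorted.
        assert all(a <= b for a, b in zip(self._items, self._items[1:])), \
            "invariant violated: list is no longer sorted"

    def insert(self, value):
        bisect.insort(self._items, value)   # a mutant (e.g. plain append)
        self._assert_invariant()            # would be caught right here

    def remove(self, value):
        self._items.remove(value)
        self._assert_invariant()

s = SortedList()
for v in (3, 1, 2):
    s.insert(v)
print(s._items)  # [1, 2, 3]; a faulty insert would have raised instead
```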
[15_MSc_cunha]
Francisco José Plácido CUNHA.
JAT4BDI: uma nova abordagem de testes para agentes deliberativos. [Title in English: JAT4BDI: a new approach to testing deliberative agents].
M.Sc. Diss. Port. Presentation: 17/12/2015. 82 p. Advisor: Carlos José Pereira de Lucena.
Abstract:
The growth and popularity of the Web have fueled the development of
network-based software. The use of multi-agent systems (MAS) in this context is
considered a promising approach and has been applied in different areas, such as
security, mission-critical business scenarios, and enhanced monitoring of
environments and people, which means that analyzing the choices made by this
type of software can become crucial. However, the methodologies proposed so far
by Agent-Oriented Software Engineering (AOSE) have focused their efforts mainly
on disciplined approaches to analyze, design and implement a MAS, and little
attention has been given to how such systems can be tested. Furthermore, with
regard to tests involving software agents, some issues related to
controllability and observability hamper the task of checking behavior, such as:
(i) the time the agent takes in its decision-making process; (ii) the fact that
the agent's beliefs and goals are embedded in the agent itself, hampering the
observation and control of its behavior; (iii) problems associated with test
coverage. This research presents a novel approach for unit testing of BDI agents
written in BDI4JADE, based on combining and arranging ideas supported by the JAT
framework, a framework for testing agents written in JADE, and the fault model
proposed by Zhang.
[15_MSc_amorim]
Franklin Anderson AMORIM.
Mineração de itens frequentes em sequências de dados: uma implementação eficiente usando vetores de bits. [Title in English: Mining frequent itemsets in data streams: an efficient implementation using bit vectors].
M.Sc. Diss. Port. Presentation: 04/09/2015. 48 p. Advisor: Marco Antonio Casanova.
Abstract: The mining of frequent itemsets in data streams has several
practical applications, such as user behavior analysis, software testing and
market research. Nevertheless, the massive amount of data generated may pose an
obstacle to processing it in real time and, consequently, to its analysis
and decision making. Thus, improvements in the efficiency of the algorithms used
for these purposes may bring great benefits for systems that depend on them.
This thesis presents the MFI-TransSW+ algorithm, an optimized version of
MFI-TransSW algorithm, which uses bit vectors to process data streams in real
time. In addition, this thesis describes the implementation of a news articles
recommendation system, called ClickRec, based on the MFI-TransSW+, to
demonstrate the use of the new version of the algorithm. Finally, the thesis
describes experiments with real data and presents results of performance and a
comparison between the two algorithms in terms of performance and the hit rate
of the ClickRec recommendation system.
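The bit-vector idea behind this family of algorithms can be sketched compactly (an illustrative simplification, not the dissertation's implementation; SlidingWindowMiner is a hypothetical name): each item keeps one bit per transaction in the sliding window, and the support of an itemset is the popcount of the AND of its items' bit vectors:

```python
from itertools import combinations

class SlidingWindowMiner:
    def __init__(self, window_size):
        self.n = window_size
        self.slot = 0        # next window position to overwrite
        self.bits = {}       # item -> int used as a bit vector

    def add_transaction(self, items):
        mask = 1 << self.slot
        for item in self.bits:           # expire the transaction that
            self.bits[item] &= ~mask     # previously occupied this slot
        for item in items:
            self.bits[item] = self.bits.get(item, 0) | mask
        self.slot = (self.slot + 1) % self.n

    def support(self, itemset):
        vec = (1 << self.n) - 1          # all window slots set
        for item in itemset:
            vec &= self.bits.get(item, 0)
        return bin(vec).count("1")       # popcount

    def frequent(self, min_support, size=2):
        items = sorted(i for i, v in self.bits.items() if v)
        return [c for c in combinations(items, size)
                if self.support(c) >= min_support]

miner = SlidingWindowMiner(window_size=3)
for t in [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}]:
    miner.add_transaction(t)
print(miner.frequent(min_support=2))  # [('a', 'b'), ('a', 'c')]
```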
[15_PhD_lima]
Guilherme Augusto Ferreira LIMA.
A synchronous virtual machine for multimedia presentations. [Title in Portuguese: Uma máquina virtual síncrona para apresentações multimídia].
Ph.D. Thesis. Eng. Presentation: 01/12/2015. 134 p. Advisor: Luiz Fernando Gomes Soares.
Abstract:
Current high-level multimedia languages are limited. Their limitation stems not
from the lack of features but from the complexity caused by the excess of them
and, more importantly, by their unstructured definition. Languages such as NCL,
SMIL, and HTML define innumerable constructs to control the presentation of
audiovisual data, but they fail to describe how these constructs relate to each
other, especially in terms of behavior. There is no clear separation between
basic and derived constructs, and no apparent principle of hierarchical build-up
in their definition. Users may not need such principle, but it is indispensable
for the people who define and implement these languages: it makes specifications
and implementations manageable by reducing the language to a set of basic
(primitive) concepts. In this thesis, a set of such basic concepts is proposed
and taken as the language of a virtual machine for multimedia presentations.
More precisely, a novel high-level multimedia language, called Smix (Synchronous
Mixer), is presented and defined to serve as an appropriate abstraction layer
for the definition and implementation of higher level multimedia languages. In
defining Smix, that is, choosing a set of basic concepts, this work strives for
minimalism but also aims at tackling major problems of current high-level
multimedia languages, namely, the inadequate semantic models of their
specifications and unsystematic approaches of their implementations. On the
specification side, the use of a simple but expressive synchronous semantics,
with a precise notion of time, is advocated. On the implementation side, a
two-layered architecture that eases the mapping of specification concepts into
digital signal processing primitives is proposed. The top layer (front end) is
the realization of the semantics, and the bottom layer (back end) is structured
as a multimedia digital signal processing dataflow.
[15_MSc_zampronio]
Guilherme Bezerra ZAMPRONIO.
Um sistema de simulação 3D de evacuação de emergência em plataformas de petróleo.
[Title in English: A 3D simulation system for
emergency evacuation in offshore platforms].
M.Sc. Diss. Port. Presentation: 02/03/2015. 57 p. Advisor: Alberto Barbosa
Raposo.
Abstract:
An application for evacuation simulation using computational resources may help
preview situations, flows, conflicts, and behaviors that may only happen in a
real danger situation. This kind of application enables the execution of several
pre-defined scenarios at any time, without the expensive and complex allocation
of real people. This dissertation proposes a 3D emergency simulation system for
oil platforms, with real-time results, built on a game engine (Unity). The
solution developed was tested on models of real platforms, for comparison with
times obtained in emergency drills with real people. System performance is also
reported, along with directions for future work.
[15_MSc_borges]
Heraldo Pimenta BORGES FILHO.
Predição do comportamento do mercado financeiro utilizando notícias em Português.
[Title in English:
Stock
market behavior prediction using financial news in Portuguese].
M.Sc. Diss. Port. Presentation: 29/08/2015. 51 p. Advisor: Ruy Luiz Milidiu.
Abstract: A
set of financial theories, such as the efficient market hypothesis and the
theory of the random walk, says it is impossible to predict the future of the
stock market based on currently available information. However, recent research
has proven otherwise by finding a relationship between the content of a news
article and the current behavior of a stock. Our goal is to develop and
implement a prediction algorithm that uses financial news about joint-stock
companies to predict the stock's behavior on the stock exchange. We use an
approach based on machine learning for the task of predicting the behavior of a
stock as up, down or neutral, using quantitative and qualitative financial
information. We evaluate our system on a dataset of six thousand news articles,
and our experiments indicate an accuracy of 68.57% for the task.
[15_MSc_gualandi]
Hugo Musso GUALANDI.
Typing dynamic languages – a review.
M.Sc. Diss. Eng. Presentation: 08/09/2015. 89 p. Advisor: Roberto Ierusalimschy.
Abstract: Programming languages have
traditionally been classified as either statically typed or dynamically typed,
the latter often being known as scripting languages. Dynamically typed languages
are very popular for writing smaller programs, a setting where ease of use and
flexibility of the language are highly valued. However, with time, small scripts
tend to evolve into large systems and the flexibility of the dynamic language
may become a source of program defects. For these larger systems, static typing,
which offers compile-time error detection, improved documentation and
optimization opportunities, becomes more attractive. Since rewriting the whole
system in a statically typed language is not ideal from a software engineering
point of view, investigating ways of adding static types to existing dynamically
typed programs has been a thriving research area. In this work, we present a
historical overview of this research. We focus on general approaches that apply
to multiple programming languages, such as the Type Hints of Common LISP, the
Soft Typing of Fagan et al. and the Gradual Typing of Siek et al., contrasting
these different solutions from a modern perspective.
[15_MSc_herszterg]
Ian Hodara HERSZTERG.
2D phase unwrapping via minimum spanning forest with balance constraints.
[Title in Portuguese: Phase unwrapping 2D via
floresta geradora mínima com restrições de balanceamento].
M.Sc. Diss. Eng. Presentation: 31/08/2015. 89 p. Advisors: Thibaut Vidal and
Marcus Vinicius Soledade Poggi de Aragão (co-advisor).
Abstract: The development and application of techniques in coherent signal
processing have greatly increased over the past several years. Synthetic
aperture radar, acoustic imaging, magnetic resonance, X-Ray crystallography and
seismic processing are just a few examples in which coherent processing is
required, resulting in the need for accurate and efficient phase unwrapping
methods. The phase unwrapping problem consists in recovering a continuous phase
signal from phase data originally wrapped into the (-π, π] interval. This
dissertation proposes a new model for the L0-norm
2D phase unwrapping problem, in which the singularities of the wrapped phase
image are associated with a graph whose vertices have different polarities (+1
and -1). The objective is to find a minimum cost balanced spanning forest where
the sum of polarities is equal to zero in each tree. A set of primal and dual
heuristics, a branch-and-cut algorithm and a hybrid metaheuristic method are
proposed to address the problem, leading to an efficient approach for L0-norm
2DPU, previously viewed as highly desirable but intractable. A set of
experimental results illustrates the effectiveness of the proposed algorithm,
and its competitiveness with state-of-the-art algorithms.
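As a rough illustration of the balancing idea only (not the branch-and-cut or
metaheuristic actually proposed), the toy Python sketch below pairs each +1
singularity with a -1 singularity greedily, so that every resulting component
has zero net polarity; the coordinates are made up.
```python
# Toy illustration of the residue-balancing idea: pair each +1 singularity
# with a -1 singularity so that every connected component sums to zero.
# Greedy nearest-neighbour pairing is only a weak baseline heuristic.

def greedy_balance(pos, neg):
    """pos, neg: lists of (x, y) residue coordinates with polarity +1 / -1.
    Returns a list of ((x, y), (x, y)) connections with zero net polarity."""
    cuts, free = [], list(neg)
    for p in pos:
        # connect p to the closest still-unmatched negative residue
        q = min(free, key=lambda n: (p[0] - n[0]) ** 2 + (p[1] - n[1]) ** 2)
        free.remove(q)
        cuts.append((p, q))
    return cuts

plus = [(1, 1), (4, 5)]
minus = [(2, 1), (5, 5)]
for a, b in greedy_balance(plus, minus):
    print(a, "->", b)
```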
[15_PhD_monteiro]
Ingrid Texeira MONTEIRO.
Autoexpressão e
engenharia semiótica do usuário-designer.
[Title in English: User-designer’s self-expression and semiotic engineering].
Ph.D. Thesis. Port. Presentation: 15/04/2015. 312 p. Advisor: Clarisse Sieckenius
de Souza.
Abstract: This thesis presents research in the area of End-User
Development (EUD). The first studies in EUD emerged as an attempt to help
end users achieve specific goals of personalization and customization of
interfaces and systems, primarily for their own benefit. As needs evolve, end
users have to know and often master more complex computing concepts and
practices. In this context, there have been a growing number of initiatives to
encourage, teach and support users in programming and thinking computationally.
In general, much emphasis is given to problem solving, logical reasoning and
other common computer scientists’ skills. However, supported by Semiotic
Engineering, a semiotics-based theory that describes human-computer interaction
as communication between designers and users, we believe that interactive
computer systems are communication artifacts: that the person who creates the
system sends various messages, with particular characteristics to the person who
uses it. In this thesis, we present an extensive study in which end users,
acting as designers, create computational artifacts for communication purposes.
Research has shown that the participants took programming and other development
activities not as ends in themselves but as a means to build their messages. We
discuss how the change in perspective (from problem-solving to communication)
reveals a range of underexplored phenomena, such as self-expression of the
designers and the pragmatics of interaction languages they build. Another
contribution of this thesis is an extension to Semiotic Engineering, named EUME
– End-User Semiotic Engineering, a new way to look at Semiotic Engineering, in
the perspective of end users acting as designers.
[15_PhD_barbosa]
Ivanildo BARBOSA.
Avaliação do impacto de acidentes de
trânsito no tráfego de vias urbanas a partir de valores de velocidade. [Title in English:
Assessment of the impact of traffic accidents on the flow of urban roads based
on speed values].
Ph.D. Thesis. Port. Presentation: 27/03/2015. 175 p. Advisor: Marco Antonio
Casanova.
Abstract: A major concern in large cities is to minimize the effects of the
increasing quantity of vehicles in circulation and, consequently, of the
accidents that tend to occur more frequently. Due to the popularization and
miniaturization of GPS receivers, the availability of large volumes of data
about vehicle speed in urban roads and the large number of traffic-related
messages published in social networks, it is now possible to collect enough
input data to model traffic conditions based on the observed reduction in speed
values. However, it is necessary to filter the data to minimize thematic,
spatial and temporal uncertainties. This thesis proposes a methodology to assess
the impact of traffic accidents by analyzing speed values. To achieve this goal,
it also proposes auxiliary methodologies, aiming at: (1) processing GPS-tracked
routes to compute speed statistics and estimate traffic in two-way streets, by
performing direction analysis; (2) representing traffic behavior based on the
observed speed values; (3) extracting and selecting accident-related data by
mining Twitter posts for later identification of the likely effects on speed
values. The main contributions of this thesis are: (1) the assessment of traffic
conditions based on speed values, which are easier to acquire than data about
traffic volume and concentration; (2) the use of posts from social networks,
which provide timely access to traffic events; (3) the assessment of urban roads
instead of freeways or roads, which require modeling intersections, traffic
lights and pedestrian flow; and (4) a methodology designed to extract speed
statistics from raw GPS data, which handles likely error sources related to both
map matching process and temporal classification.
[15_MSc_escobar]
Jaisse Grela ESCOBAR.
Uma ferramenta para o rastreamento de vídeos e imagens utilizando técnicas de
esteganografia. [Title in English: A tool for tracking videos and images
using steganography techniques].
MSc. Diss. Port. Presentation: 10/04/2015. 57 p. Advisor: Bruno Feijo.
Abstract: In the TV industry, leaks of film materials occur frequently when
they are distributed among the members of the production team, causing great
harm to the companies. In this work, we propose a tool that allows detecting
the source of the leak with a high degree of confidence, using techniques of
adaptive steganography. An important requirement is that the information
embedded in the video (or image) should resist processing operations such as
resizing and resolution changes. The idea is to use the “Speeded Up Robust
Features” (SURF) algorithm, a well-known strategy for detection and description
of image features, to detect the robust regions of the image and insert a small
masked identification in them. The tool uses the "Haar - Discrete Wavelet
Transform" in two dimensions and then modifies the image. This dissertation
proposes promising initial directions for secure identification of the
certificate of origin of digital images and videos.
[15_MSc_bastos]
João Antonio Dutra Marcondes BASTOS.
Apoio à transferência de conhecimento
de raciocínio computacional de linguagens de programação visuais para linguagens
de programação textuais.
[Title in English: Support for computational
thinking knowledge transfer from visual programming languages to textual
programming languages].
M.Sc. Diss. Port. Presentation: 13/04/2015. 76 p. Advisor: Clarisse Sieckenius
de Souza.
Abstract: Producing technology has become an increasingly essential ability in
modern society. Users are no longer mere consumers but also technology
producers, using technology to express their ideas. In this context, the
learning of so-called "computational thinking" should be as important as
learning basic disciplines such as reading, writing and arithmetic. Once
students develop this ability, they will be able to express themselves through
software. Many projects around the world have their own
technologies and pedagogy to help the student develop such capacity. However, we
know that in a context that is constantly evolving as is the case of
informatics, we cannot allow the student to be attached to a single tool or
means. Tools may become obsolete and students would lose their technology
producer status. With this in mind, we designed a learning transfer model of
"computational thinking", which will assist the designer in the creation of a
technological artifact to help students and teachers learn a new programming
language. The model, which is based on the Semiotic Engineering, is the main
scientific contribution of this master's dissertation.
[15_PhD_ferreira]
Juliana Soares Jansen FERREIRA.
Comunicação através de modelos no contexto do desenvolvimento de
software. [Title in English:
Communication through models in the context of software development].
Ph.D. Thesis. Port. Presentation: 10/04/2015. 200 p. Advisor: Clarisse Sieckenius
de Souza.
Abstract: Software development is a highly collaborative process where
software construction is the common goal. It is supported at several stages by
computer tools, including software modeling tools. Models are important
artifacts of the software development process and constitute the focus of this
research, which aims to investigate the ‘communicability’ of software models
produced and consumed with the support of modeling tools. Software model
communicability is the capacity that such artifacts have of carrying and
effecting a communication process among people, or of being used as an
instrument to perform a significant part of such process. Modeling tools have a
direct impact on that communicability, since model producers and consumers
interact with those tools throughout the software development process. During
that interaction, software models, which are intellectual artifacts, are
created, changed, evolved, transformed and shared by people involved in
activities of specification, analysis, design and implementation of the software
under development. Besides the influence of tools, software modeling also needs
to take into consideration previously defined notations as premises for modeling
activities. This research is an investigation on how tools and notations
influence and support the intellectual process of production and consumption of
software models. We adopt Semiotic Engineering as our guiding theory, whose
essence here is a careful study of the tools people interact with to build,
use and publish models through which they coordinate teamwork. The use of models
in the software development process is a phenomenon that includes several
factors that cannot be isolated from each other. Therefore, we propose a
Tool-Notation-People triplet (TNP triplet) as a means of articulation to
characterize observed issues about models in the software development. Along
with the TNP triplet, we introduce a method that combines semiotic and cognitive
perspectives to evaluate software modeling tools, producing data about the
emission of designer-user metacommunication, users in this case being software
developers. We aim to track potential relations between the human-computer
interaction experience of those involved in the software development process
while creating/reading/editing models with: (a) the product (types of models)
generated in the process; and (b) the interpretations that such models evoke
when used effectively in everyday practical situations to ‘communicate and
express ideas and understandings’. The interest of working with Semiotic
Engineering in this research is twofold. First, as an ‘observation lens’, the
theory offers many resources to investigate and understand the construction and
use of computational artifacts, their meanings and roles in the communication
process. Second, a better perspective about the complete process that results,
ultimately, in the user experience during the interaction with the software is
relevant for the theory’s own evolution. In other words, this research has
produced further knowledge about the communication conditions and mutual
understanding of those who, according to the theory, ‘communicate their intent
and design principles through the interface’, a potentially valuable source of
explanations about communication problems in HCI.
[15_MSc_fontoura]
Leonardo Lobo da Cunha da FONTOURA.
On the min distance superset
problem.
[Title in Portuguese: Sobre o problema de superset
mínimo de distâncias].
M.Sc. Diss. Eng. Presentation: 19/08/2015. 48 p. Advisors: Thibaut Vidal and
Marcus Vinícius Soledade Poggi de Aragão.
Abstract: The Partial Digest Problem, also known as the Turnpike Problem,
consists of building a set of points on the real line given their unlabeled
pairwise distances. A variant of this problem, named Min Distance Superset
Problem, deals with incomplete input in which distances may be missing. The goal
is to find a minimal set of points on the real line such that the multiset of
their pairwise distances is a superset of the input. The main contributions of
this work are two different mathematical programming formulations for the Min
Distance Superset Problem: a quadratic programming formulation and an integer
programming formulation. We show how to apply direct computation methods for
variable bounds on top of a Lagrangian relaxation of the quadratic formulation.
We also introduce two approaches to solve the integer programming formulation,
both based on binary searches on the cardinality of an optimal solution. One is
based on a subset of decision variables, in an attempt to deal with a simpler
feasibility problem, and the other is based on distributing available distances
between possible points.
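For intuition, the sketch below checks the defining condition of a feasible
solution: that the pairwise distances of a candidate point set form a multiset
superset of the input. It is a hypothetical helper for illustration, not either
of the two formulations proposed in the dissertation.
```python
# Feasibility check implicit in any Min Distance Superset approach:
# is the input distance multiset contained in the multiset of pairwise
# distances of the candidate points?

from collections import Counter
from itertools import combinations

def is_superset_solution(points, distances):
    pairwise = Counter(abs(a - b) for a, b in combinations(points, 2))
    needed = Counter(distances)
    return all(pairwise[d] >= k for d, k in needed.items())

# The input {2, 5, 7} is covered by the pairwise distances of {0, 2, 7}.
print(is_superset_solution([0, 2, 7], [2, 5, 7]))  # True
print(is_superset_solution([0, 2, 6], [2, 5, 7]))  # False
```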
[15_MSc_benevides]
Leonardo
de Paula Batista BENEVIDES.
Oclusão de ambiente para renderização de linhas.
[Title in English: Ambient occlusion for line
rendering].
M.Sc. Diss. Port. Presentation: 25/09/2015. 66 p. Advisor: Waldemar Celes Filho.
Abstract: The three-dimensional understanding of dense line sets requires
the use of more sophisticated lighting models. Ambient occlusion is a technique
used to simulate, realistically and efficiently, indirect ambient lighting.
This work presents a new algorithm for rendering lines with ambient occlusion.
The proposed algorithm is based on the voxelization of the scene and on the
computation of occlusion in the hemisphere associated to each visible point. We
propose an adaptation of the voxelization algorithm for 3D scenes made up of
solids so that it correctly treats scenes formed by lines. Thus, a volumetric
geometry description is created in a texture buffer. The hemisphere around every
visible point is sampled by several points, and for each sample a prism is
generated, whose occluded volume is calculated from the voxelization. By
accumulating the results of each sample, the estimated ambient occlusion caused
by the geometry at each point visible to the observer is computed. This strategy
proved to be appropriate, resulting in high-quality images in real time for
complex scenes.
[15_MSc_campagnolo]
Leonardo Quatrin CAMPAGNOLO.
Visualização volumétrica precisa
baseada em integração numérica adaptativa.
[Title in English: Accurate volume rendering based
on adaptive numerical integration].
M.Sc. Diss. Port. Presentation: 19/08/2015. 76 p. Advisor: Waldemar Celes Filho.
Abstract: One of the main challenges in volume rendering algorithms is how
to compute the Volume Rendering Integral accurately, while maintaining good
performance. Commonly, numerical methods use equidistant samples to approximate
the integral and do not include any error estimation strategy to control
accuracy. As a solution, adaptive numerical methods can be used, because they
can adapt the step size of the integration according to an estimated numerical
error. On CPU, adaptive integration algorithms are usually implemented
recursively. On GPU, however, it is desirable to eliminate recursive algorithms.
In this work, an adaptive and iterative integration strategy is presented to
evaluate the volume rendering integral for regular volumes, maintaining the
control of the step size for both internal and external integrals. A set of
computational experiments was conducted, comparing both accuracy and efficiency
against the Riemann summation with uniform step size. The proposed algorithm
generates accurate results, with competitive performance. The comparisons were
made using both CPU and GPU implementations.
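The following Python sketch illustrates the iterative, non-recursive adaptive
idea on a toy 1D integrand: a step is accepted only when one full step and two
half steps agree within a tolerance. The integrand, tolerance and step bounds
are illustrative assumptions, not the dissertation's rendering kernels.
```python
# Sketch of adaptive step-size evaluation of a 1D integral using the classic
# "one full step vs. two half steps" error estimate, written as a loop
# (no recursion), in the spirit of the GPU-friendly strategy described above.

def integrand(t):
    return 0.5 + 0.4 * (t * t)   # hypothetical stand-in for the inner integral

def adaptive_integral(a, b, h0=0.5, tol=1e-4, h_min=1e-4):
    total, t, h = 0.0, a, h0
    while t < b:
        h = min(h, b - t)
        full = h * integrand(t + 0.5 * h)                 # midpoint, one step
        half = 0.5 * h * (integrand(t + 0.25 * h) +
                          integrand(t + 0.75 * h))        # two half steps
        if abs(full - half) <= tol or h <= h_min:
            total += half
            t += h
            h *= 2.0      # estimate was good: try a larger step
        else:
            h *= 0.5      # too much error: shrink the step and retry
    return total

print(adaptive_integral(0.0, 1.0))  # ~0.6333 = 0.5 + 0.4/3
```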
[15_MSc_alves]
Luciana
Brasil Sondermann ALVES.
Um
estudo sobre a captura de objetos em movimento com múltiplas câmeras RGB-D para
efeitos especiais.
[Title in English: A study on capturing moving objects
with multiple RGB-D cameras for special effects].
M.Sc. Diss. Port. Presentation: 17/07/2015. 59 p. Advisor: Bruno Feijó.
Abstract: This dissertation is an investigation on the generation of visual
effects (VFX) from the capture of moving objects as very dense point clouds
using multiple low-cost RGB-D cameras. For this investigation, we use
commercial software for particle rendering and some modules already developed
by the VFX R&D department of TV Globo in partnership with the ICAD/VisionLab
laboratory for the simultaneous capture from multiple MS Kinect cameras. In
the proposed production pipeline, a server synchronizes the shooting of multiple
cameras, unifies the clouds and generates a file in a standard format (PRP or
PLY). This file is then used for volumetric particle rendering with added
visual effects using the Krakatoa MX software for 3DS max. The goal is to shoot
scenes in such a way that the film director can later define the final scenes
with any camera path and added visual effects.
[15_MSc_souza]
Luciano Sampaio
Martins de SOUZA.
Early vulnerability detection for
supporting secure programming. [Title in Portuguese: Detecção da vulnerabilidade
de segurança em tempo de programação com intuito de dar suporte à programação
segura].
MSc. Diss. Eng. Presentation: 15/01/2015. 132 p. Advisor: Alessandro Fabrício
Garcia.
Abstract: Secure programming is the practice of writing programs that are
resistant to attacks by malicious people or programs. Programmers of secure
software have to be continuously aware of security vulnerabilities when writing
their program statements. They also ought to continuously perform actions for
preventing or removing vulnerabilities from their programs. In order to support
these activities, static analysis techniques have been devised to find
vulnerabilities in the source code. However, most of these techniques are built
to encourage vulnerability detection a posteriori, only when developers have
already fully produced (and compiled) one or more modules of a program.
Therefore, this approach, also known as
late
detection, does not support secure programming
but rather encourages posterior security analysis. The lateness of vulnerability
detection is also influenced by the high rate of false positives, yielded by
pattern matching, the underlying mechanism used by existing static analysis
techniques. The goal of this dissertation is twofold. First, we propose to
perform continuous detection of security vulnerabilities while the developer is
editing each program statement, also known as early detection. Early
detection can leverage the developer's knowledge of the context of the code
being created, contrary to late detection, when developers struggle to recall
and fix the intricacies of vulnerable code they produced hours to weeks ago. Our
continuous vulnerability detector is incorporated into the editor of an
integrated software development environment. Second, we explore a technique
originally created and commonly used for implementing optimizations on
compilers, called data flow analysis, hereinafter referred to as DFA. DFA
has the ability to follow the path of an object until its origins or to paths
where it had its content changed. DFA might be suitable for finding if an object
has a vulnerable path. To this end, we have implemented a proof-of-concept
Eclipse plugin for continuous vulnerability detection in Java programs. We also
performed two empirical studies based on several industry-strength systems to
evaluate if the code security can be improved through DFA and early
vulnerability detection. Our studies confirmed that: (i) the use of data flow
analysis significantly reduces the rate of false positives when compared to
existing techniques, without being detrimental to the detector performance, and
(ii) early detection improves the awareness among developers and encourages
programmers to fix security vulnerabilities promptly.
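A toy sketch of the underlying data flow idea (tainted values followed from
source to sink) is shown below; the real detector operates on Java code inside
Eclipse, and the statement encoding and names here are hypothetical.
```python
# Toy data-flow analysis: propagate "tainted" (user-controlled) values through
# straight-line assignments and flag their use in a sensitive sink.

SOURCES = {"request.getParameter"}   # producers of untrusted data
SINKS = {"executeQuery"}             # calls that must not receive raw input

program = [
    ("assign", "name", "request.getParameter"),   # name = source()
    ("assign", "query", "name"),                  # taint flows into query
    ("call", "executeQuery", "query"),            # potential SQL injection
]

tainted = set()
for stmt in program:
    if stmt[0] == "assign":
        _, lhs, rhs = stmt
        if rhs in SOURCES or rhs in tainted:
            tainted.add(lhs)
        else:
            tainted.discard(lhs)
    elif stmt[0] == "call":
        _, fn, arg = stmt
        if fn in SINKS and arg in tainted:
            print(f"warning: tainted value '{arg}' reaches sink '{fn}'")
```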
[15_MSc_souza]
Luiz Gustavo de SOUZA.
Estendendo a MoLIC para apoiar o
design de sistemas colaborativos.
[Title in English: Extending MoLIC to support the
design of collaborative systems]. M.Sc. Diss. Port. Presentation: 08/09/2015. 126 p. Advisor:
Simone Diniz Junqueira Barbosa.
Abstract: The field of
Computer-Supported Cooperative Work focuses on the understanding of
collaborative systems and methodologies for the design and development of such
systems. The 3C Collaboration Model divides the collaboration process into
communication, coordination and cooperation. Regarding Human-Computer
Interaction, different interaction models aim to support designers in the user
and system interaction design. Semiotic Engineering views the design and use of
technology as communication processes. It contributes with interaction design
models, such as MoLIC, a language that supports the design of the interaction
between the user and the designer’s deputy (the user interface). The original
MoLIC language provides no support for collaborative systems design, raising the
need for a study that considers these limitations, addressing questions in order
to understand the interaction design of collaborative systems based on the 3C
Model using MoLIC. The present work presents a review on MoLIC focusing
collaborative systems, presenting the extension MoLICC, whose effectiveness for
collaborative systems design we evaluated by conducting an empirical study with
users and analyzing the language using the Cognitive Dimensions of Notations
framework.
[15_PhD_afonso]
Luiz Marques AFONSO.
Communicative dimensions of
application programming interfaces (APIs).
[Title in Portuguese: Dimensôes Comunicativas de
Interfaces de Programação (APIs)].
Ph.D. Thesis. Eng. Presentation: 06/04/2015. 153 p. Advisor: Clarisse Sieckenius
de Souza.
Abstract: Application programming interfaces (APIs) have a central role in
software development, as programmers have to deal with a number of routines and
services that range from operating system libraries to large application
frameworks. In order to effectively use APIs, programmers should have good
comprehension of these software artifacts, making sense of the underlying
abstractions and concepts by developing an interpretation that is compatible
with the designer's intent. Due to the complexity of today's systems and
programming environments, learning and using an API properly can be non-trivial
task to many programmers. Traditionally, studies on API design have been
developed from a usability standpoint. These studies have provided evidence that
bad APIs may a_ect programmer's productivity and software quality, offering
valuable insights to improve the design of new and existing APIs. This thesis
proposes a novel approach to investigate and discuss API design, based on a
communication perspective under the theoretical guidance of Semiotic
Engineering. From this perspective, an API can be viewed as a communication
process that takes place between designer and programmer, in which the former
encodes a message to the latter about how to communicate back with the system
and use the artifact's features, according to its design vision. This approach
provides an account of API design space that highlights the pragmatic and
cognitive aspects of human communication mediated by this type of software
artifact. By means of the collection and qualitative analysis of empirical data
from bug repositories and other sources, this research work contributes to a
deeper comprehension of the subject, providing an epistemic framework that
intends to support the analysis, discussion and evaluation of API design.
[15_MSc_costa]
Marcelo Arza Lobo da COSTA.
Análise
de duty-cycling para economia de energia na disseminação de código em rede de
sensores.
[Title in English: Analysis of duty-cycling for
saving energy in code dissemination over sensor networks]. M.Sc. Diss. Port. Presentation: 08/09/2015.
49 p. Advisor: Noemi de La Roque Rodriguez.
Abstract: One of the key challenges in wireless sensor networks (WSN) is to
save energy at motes. One method to save battery is radio duty cycling (DC),
which keeps the radio turned off most of the time and turns it on for
short periods to check whether there are any messages. DC is frequently used in
monitoring applications where only one message is transmitted after the mote
reads its sensor. Usually the mote reads its sensor only once every few minutes,
so few unicast messages are transmitted in the network per time unit. This work
analyzes the use of the DC method in code dissemination. In this context,
multiple broadcast messages are transmitted in a short time. We examined two
specific dissemination algorithms, one of them proposed for a virtual machine
environment, in which the disseminated code is a small script, and a second one
originally proposed for disseminating the code of an entire application,
typically much larger than a script. The objective of this study is to evaluate
the impact of DC on latency and how much energy was saved when compared to
leaving the radio on all the time, which is how both algorithms work in their
original form.
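A back-of-envelope model of the energy trade-off is sketched below; all power
figures and the 1% duty cycle are illustrative assumptions, not measurements
from this work.
```python
# Comparison of radio energy with and without duty cycling (toy figures).

P_RX = 20.0       # mW, radio listening/receiving
P_SLEEP = 0.02    # mW, radio off (sleep)
PERIOD = 1.0      # s, duty-cycle period
ON_TIME = 0.01    # s, listen window per period (1% duty cycle)

def energy_mj(duration_s, duty_cycled):
    if not duty_cycled:
        return P_RX * duration_s                 # radio always on
    cycles = duration_s / PERIOD
    on = cycles * ON_TIME
    return P_RX * on + P_SLEEP * (duration_s - on)

for hours in (1, 24):
    t = hours * 3600
    print(f"{hours:2d} h: always-on {energy_mj(t, False)/1000:8.1f} J, "
          f"1% DC {energy_mj(t, True)/1000:8.1f} J")
```
The savings come at the cost of latency for incoming messages, which is
precisely the trade-off the study examines for code dissemination.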
[15_MSc_nascimento]
Marcelo
de Mattos NASCIMENTO. Utilizando reconstrução 3D densa para odometria visual
baseada em técnicas de structure from motion.
[Title in English: Using dense 3D reconstruction for visual odometry
based on structure from motion techniques]. M.Sc. Diss. Port. Presentation: 17/09/2015.
59 p. Advisor: Alberto
Barbosa Raposo.
Abstract: The subject of intense research in the field of computational
vision, dense 3D reconstruction reached an important landmark with the first
methods running in real time with millimetric precision, using RGBD cameras and
GPUs. However, these methods are not suitable for low computational resources.
Having low computational resources as a requirement, the goal of this work is to
present a visual odometry method using regular cameras, without using a GPU. The
proposed method is based on techniques of sparse Structure From Motion (SFM),
using data provided by dense 3D reconstruction. Visual odometry is the process
of estimating the position and orientation of an agent (a robot, for instance),
based on images. This dissertation compares the proposed method with the
odometry calculated by Kinect Fusion. Results of this research are applicable in
augmented reality. The odometry provided by this work can be used to model a
camera, and the data from dense 3D reconstruction can be used to handle
occlusion between virtual and real objects.
[15_PhD_alencar]
Marcus Franco Costa de ALENCAR.
A Context-aware model for
decision-making within multi-expertise tactical planning in a major incident.
[Title in Portuguese: Modelo ciente de contexto para
tomada de decisão no planejamento tático multi-especialidade em um grande
incidente].
Ph.D. Thesis. Eng. Presentation: 27/02/2015. 112 p. Advisors: Alberto
Barbosa Raposo and Simone Diniz Junqueira Barbosa.
Abstract: Activities that involve very complex and unpredictable problems
are still relying on human decision-making and manual tools like paper forms.
This is still the case for decision-making within tactical planning in a major
incident, where the collective knowledge of multi-expertise specialists is
essential to make urgent and effective decisions. Specialists tend to reject
tools that jeopardize the decision agility required, as it can put human lives
at risk and cause major damage to the environment and property. Communication
and management are challenges, as these specialists each have their own “expert
language”. This thesis proposes a planning model that keeps decisions with
specialists and allows them to use their own expressions for tactical planning,
but incorporates expressions’ context traceability, authority control over these
expressions, and expressions storage for reuse. These features were implemented
in a Web tool that aims at empowering decision-making specialists, but tries to
preserve the attributes of paper forms. The Web tool was evaluated in oil & gas
industry emergency scenarios, and evaluation results indicated that the model’s
approach enables important improvements in tactical planning communications and
management within this context.
[15_MSc_lami]
Milena
Ossorio LAMI.
Um assistente dirigido por modelos para auxílio ao desenvolvimento
de aplicações WWW.
[Title in English: A Model-driven wizard to aid in
developing Web applications].
M.Sc. Diss. Port. Presentation: 16/03/2015. 96 p. Advisor: Daniel Schwabe.
Abstract: Web applications can be seen as examples of hypermedia
applications. Developing such applications is a complex endeavor, even when
using design methods. There are model-driven methods aimed at helping the
designer, but they still require a steep learning curve for those unfamiliar
with the models. This work addresses this problem through a model-driven wizard
that helps the designer through the use of examples and concrete data-driven
interfaces. This wizard uses direct manipulation techniques to help ease the
designer’s tasks.
[15_MSc_moreira]
Nara Torres MOREIRA. A MIP-based approach to
solve a real-world school timetabling problem.
[Title in Portuguese: Uma abordagem baseada em
programação inteira mista para resolver um problema do mundo real de geração de
grades horárias escolares].
M.Sc. Diss. Port. Presentation: 21/05/2015. 136 p. Advisor: Marcus
Vinicius Soledade Poggi de Aragão.
Abstract: Timetabling problems seek to schedule meetings in order to satisfy
a set of demands, while respecting additional constraints. In a good solution
the resulting timetables are acceptable to all people and resources involved. In
school timetabling, a given number of lectures, involving students, teachers and
classrooms, need to be scheduled over the week, while having to satisfy
operational, institutional, pedagogical and personal restrictions. The difficulty
of the problem has driven many researchers to work on solving approaches for it
since the early 1960's. Finding an actual solution to a real world scenario
implies satisfying many quality requirements and not ignoring the political
issues, which turns the classical problem much more intricate. This work
describes an approach based on mixed integer programming (MIP) developed for
solving a real-world school timetabling problem and discusses ideas and issues
faced during solution deployment phase for some Brazilian schools. In contrast
to other works on school timetabling, teaching staff sharing between distinct
school units is considered. Computational experiments were performed for
scenarios whose number of school units varies from 2 to 15, number of teachers
varies from 35 to 471 and number of classes varies from 16 to 295. Different
strategies were combined, aiming at converging to good solutions. Finally,
results are evaluated and the best approaches are highlighted.
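A minimal sketch of the MIP view of the problem, written with the open-source
PuLP modeller on a hypothetical toy instance, is given below; the actual model
in the dissertation includes many more institutional, pedagogical and personal
constraints and quality objectives.
```python
# Toy MIP for school timetabling with PuLP (pip install pulp).
# x[l, s] = 1 iff lecture l is scheduled in time slot s.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

lectures = [("math", "t1", "c1"), ("hist", "t1", "c2"), ("bio", "t2", "c1")]
slots = ["mon1", "mon2"]

prob = LpProblem("school_timetable", LpMinimize)
x = {(l, s): LpVariable(f"x_{l[0]}_{s}", cat=LpBinary)
     for l in lectures for s in slots}

for l in lectures:                       # every lecture happens exactly once
    prob += lpSum(x[l, s] for s in slots) == 1

for s in slots:                          # no teacher or class double-booking
    for teacher in ("t1", "t2"):
        prob += lpSum(x[l, s] for l in lectures if l[1] == teacher) <= 1
    for cls in ("c1", "c2"):
        prob += lpSum(x[l, s] for l in lectures if l[2] == cls) <= 1

prob.solve()                             # feasibility only; no objective set
for (l, s), var in x.items():
    if var.value() > 0.5:
        print(l, "->", s)
```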
[15_MSc_nascimento]
Nathalia Moraes do NASCIMENTO.
FIoT: an agent-based framework for self-adaptive and self-organizing Internet of
Things applications.
[Title in Portuguese:
FIoT: um framework baseado em agentes para
aplicações auto-organizáveis e autoadaptativas de Internet das Coisas].
M.Sc. Diss. Eng. Presentation: 31/08/2015. 102 p. Advisor: Carlos José
Pereira de Lucena.
Abstract: The agreed fact about the Internet of Things (IoT) is that, within
the coming years, billions of resources, such as cars, clothes and foods will be
connected to the Internet. However, several challenging issues need to be
addressed before the IoT vision becomes a reality. Some open problems are
related to the need of building self-organizing and self-adaptive IoT systems.
To create IoT applications with these features, this work presents a Framework
for Internet of Things (FIoT). Our approach is based on concepts from
Multi-Agent Systems (MAS) and Machine Learning Techniques, such as a neural
network and evolutionary algorithms. An agent could have characteristics, such
as autonomy and social ability, which makes MAS suitable for systems requiring
self-organization (SO). Neural networks and evolutionary algorithms have been
commonly used in robotic studies to provide embodied agents (such as robots and
sensors) with autonomy and adaptive capabilities. To illustrate the use of FIoT,
we derived two different instances of IoT applications: (i) Quantified Things
and (ii) Smart Cities. We show how flexible points of our framework are
instantiated to generate an application.
[15_MSc_musa]
Pablo Martins MUSA. Profiling memory in Lua.
[Title in Portuguese: Analisando o uso de Memória em
Lua].
M.Sc. Diss. Port. Presentation: 19/06/2015. 89 p. Advisor: Roberto
Ierusalimschy.
Abstract: Memory bloat is a software problem that happens when the memory
consumption of a program exceeds the programmer's expectations. In many cases,
memory bloat hurts performance or even crashes applications. Detecting and
fixing memory bloat problems is a difficult task for programmers and, thus, they
usually need tools to identify and fix these problems. The past two decades
produced extensive research and many tools to help programmers tackle memory
bloat, including memory profilers. Although memory profilers have been largely
studied in recent years, there is a gap regarding scripting languages. In this
thesis, we study memory profilers in scripting languages. First, we propose a
classification in which we divide memory profilers into manual and automatic,
based on how the programmer uses the memory profiler. Then, after reviewing
memory profilers available in three different scripting languages, we
experiment with some of the studied techniques by implementing two automatic
memory profilers to
help Lua programmers deal with memory bloat. Finally, we evaluate our tools
regarding how easy it is to incorporate them to a program, how useful their
reports are to understand an unknown program and track memory bloats, and how
much overhead they impose.
[15_PhD_pampanelli]
Patrícia Cordeiro Pereira PAMPANELLI.
Suavização de dados de amplitude através de difusão
anisotrópica com preservação de feições sísmicas.
[Title in English:
Seismic amplitude smoothing by anisotropic diffusion preserving structural
features].
Ph.D. Thesis. Port. Presentation: 13/08/2015. 95 p. Advisor: Marcelo
Gattass.
Abstract: Seismic interpretation can be viewed as a set of methodologies to
enhance the understanding of the structural and stratigraphic model of a given
region. During this process, the interpreter analyzes the seismic image,
seeking to identify geological structures such as faults, horizons and channels,
among others. Given the low signal-to-noise ratio, the algorithms that
support the interpretation require a pre-processing stage where the noise is
reduced. This thesis proposes a new filtering method based on the anisotropic
diffusion of the amplitude field. The formulation of the diffusion process
proposed here uses seismic attributes to identify horizons and faults that are
preserved in the diffusion process. The thesis also presents results of the
proposed method applied to real and synthetic data. Based on these
results, we present an analysis of the influence of the proposed method in
correlation measurements over horizons previously tracked. Finally the thesis
presents some conclusions and suggestions for future work.
[15_MSc_diniz]
Pedro Henrique Fonseca da Silva DINIZ.
A spatio-temporal model for average speed prediction on roads.
[Title in Portuguese: Um modelo espaço-temporal para
a previsão de velocidade média em estradas].
M.Sc. Diss. Eng. Presentation: 21/08/2015. 75 p. Advisor: Hélio Côrtes
Vieira Lopes.
Abstract: Many factors may influence a vehicle's speed on a road, but two
of them are usually observed by many drivers: the location and the time of
day. Obtaining a model that returns the average speed as a function of
position and time is still a challenging task. Such models can be applied
in different scenarios, such as estimated time of arrival, shortest
route paths, traffic prediction, and accident detection, to cite just a few.
This study proposes a prediction model based on spatial-temporal partitions and
mean/instantaneous speeds collected from GPS data. The main advantage of
the proposed model is that it is very simple to compute. Moreover,
experimental results obtained from fuel delivery trucks, along the whole year of
2013 in Brazil, indicate that most of the observations can be predicted using
this model within an acceptable error tolerance.
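The core of such a model can be sketched in a few lines: partition space and
time into cells and predict with each cell's historical mean speed. The segment
identifiers and observations below are hypothetical toy data.
```python
# Minimal sketch of the spatio-temporal idea: partition space (road segment)
# and time (hour of day), then predict with the historical mean of the cell.

from collections import defaultdict

history = [
    ("seg42", 8, 31.0), ("seg42", 8, 29.0), ("seg42", 18, 12.0),
    ("seg42", 18, 15.0), ("seg07", 8, 55.0),
]  # (segment, hour of day, observed speed in km/h)

cells = defaultdict(list)
for seg, hour, speed in history:
    cells[(seg, hour)].append(speed)

def predict(seg, hour):
    obs = cells.get((seg, hour))
    return sum(obs) / len(obs) if obs else None

print(predict("seg42", 8))    # 30.0 km/h in the morning
print(predict("seg42", 18))   # 13.5 km/h in the evening peak
```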
[15_PhD_riverasalas]
Percy Enrique RIVERA SALAS. OLAP2Datacube: an
on-demand transformation framework from OLAP to RDF data cubes.
[Title in Portuguese: OLAP2Datacube: Um framework
para transformações em tempo de execução de OLAP para cubos de dados em RDF].
Ph.D. Thesis. Eng. Presentation: 18/09/2015. 95 p. Advisor: Marco Antonio
Casanova.
Abstract: Statistical data is one of the most important sources of
information, relevant to a large number of stakeholders in the governmental,
scientific and business domains alike. A statistical data set comprises a
collection of observations made at some points across a logical space and is
often organized as what is called a data cube. The proper definition of the data
cubes, especially of their dimensions, helps process the observations and,
more importantly, helps combine observations from different data cubes. In
this context, the Linked Data principles can be profitably applied to the
definition of data cubes, in the sense that the principles offer a strategy to
provide the missing semantics of the dimensions, including their values. In this
thesis we describe the process and the implementation of a mediation
architecture, called OLAP2DataCube On Demand, which helps describe and consume
statistical data, exposed as RDF triples, but stored in relational databases.
The tool features a catalogue of Linked Data Cube descriptions, created
according to the Linked Data principles. The catalogue has a standardized
description for each data cube actually stored in each statistical (relational)
database known to the tool. The tool offers an interface to browse the linked
data cube descriptions and to export the data cubes as RDF triples, generated on
demand from the underlying data sources. We also discuss the implementation of
sophisticated metadata search operations, OLAP data cube operations, such as
slice and dice, and data cube mashup operations that create new cubes by
combining other cubes.
[15_PhD_pereira]
Rafael Silva PEREIRA.
A cloud based real-time collaborative filtering architecture for short-lived
video recommendations.
[Title in Portuguese: Uma arquitetura de filtragem
colaborativa em tempo real baseada em nuvem para recomendação de vídeos efêmeros].
Ph.D. Thesis. Eng. Presentation: 11/12/2015. 86 p. Advisor: Hélio Cortes Vieira
Lopes.
Abstract: This dissertation argues that the combination of collaborative
filtering techniques, particularly for item-item recommendations, with emergent
cloud computing technology can drastically improve algorithm efficiency,
particularly in situations where the number of items and users scales up to
several million objects. It introduces a real-time item-item recommendation
architecture, which rationalizes the use of resources by exploring on-demand
computing. The proposed architecture provides a real-time solution for computing
online item similarity, without having to resort to either model simplification
or the use of input data sampling. This dissertation also presents a new
adaptive model for implicit user feedback for short videos, and describes how
this architecture was used in a large scale implementation of a video
recommendation system in use by the largest media group in Latin America,
presenting results from a real life case study to show that it is possible to
greatly reduce recommendation times (and overall financial costs) by using
dynamic resource provisioning in the Cloud. It discusses the implementation in
detail, in particular the design of cloud based features. Finally, it also
presents potential research opportunities that arise from this paradigm shift.
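Stripped of the cloud machinery, the item-item core reduces to a similarity
computation over co-consumption vectors, as in the toy Python sketch below;
the video and user identifiers are made up.
```python
# Sketch of item-item collaborative filtering: cosine similarity between
# items' user-interaction sets. The thesis scales this with on-demand cloud
# resources; here is only the core arithmetic on toy data.

from math import sqrt

# users who watched each (short-lived) video
watched = {
    "v1": {"u1", "u2", "u3"},
    "v2": {"u2", "u3"},
    "v3": {"u3", "u4"},
}

def cosine(a, b):
    inter = len(watched[a] & watched[b])
    return inter / sqrt(len(watched[a]) * len(watched[b]))

def recommend(video, k=2):
    scores = {v: cosine(video, v) for v in watched if v != video}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("v1"))  # items most similar to v1 by co-viewership
```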
[15_MSc_diniz]
Rafael DINIZ.
O Perfil NCL-DR e o
middleware Ginga para receptores do Sistema Brasileiro de Rádio Digital.
[Title in English: The NCL-DR profile and the Ginga
middleware for the Brazilian Digital Radio System].
M.Sc. Diss. Port. Presentation: 10/07/2015. 135 p. Advisor: Luiz Fernando
Gomes Soares.
Abstract: In 2010, the Ministry of Communications instituted the Brazilian
Digital Radio System (SBRD); however, a reference model for the system has not
yet been set. This text presents an analysis of terrestrial radio
broadcasting in Brazil and presents some requirements for an interactive digital
radio. These requirements are then used to guide the research. The relevance and
importance of using NCL and Ginga in the Brazilian digital radio, as in the
Brazilian Digital TV System (SBTVD), are discussed, and it is defined how
NCL applications should be transported in the two digital radio
systems being considered for adoption by the country. A new profile
of NCL for use in digital radio is defined. This new profile was named Digital
Radio Profile, or just DR. Ginga is also defined for use in digital radio
receivers, and new media players and features adapted to the digital radio
context are introduced. PUC-Rio’s implementation of the Ginga middleware is
presented with the necessary modifications for digital radio use. In addition,
an environment to validate interactivity in digital radio with Ginga was
assembled and tests exercising the entire digital radio chain of transmission
and reception with embedded NCL applications were performed. The definitions and
conclusions that resulted from research activities are expected to contribute to
the definition of a Brazilian Digital Radio System that is powerful in
communicational terms and aggregates the most relevant technologies for the
medium in the digital age.
[15_MSc_oliveira]
Rafael Pereira OLIVEIRA.
Sintonia fina
baseada em ontologia: o caso de visões materializadas.
[Title in English: Ontology-based database tuning:
the case of materialized views].
M.Sc. Diss. Port. Presentation: 25/03/2015. 101 p. Advisor: Sérgio
Lifschitz.
Abstract: The Outer-tuning framework may be used to support automatic (or
not) database tuning, particularly index tuning. It is an approach that offers
transparency about the alternatives available for feasible tuning scenarios,
making it possible to combine independent strategies and to discuss the
justifications for actions performed in order to obtain better performance.
Using a specific ontology for fine tuning relational databases, we add semantics
to the process with the understanding of the concepts involved and generate
(semi)automatic new tuning actions, which can be inferred from existing
practices or new rules and concepts that arise in the future. This research
presents as an initial contribution the actual design and implementation of the
Outer-tuning framework through the formalization of a software architecture that
meets the specified functional requirements. This work also contributes with the
extension of the domain ontology and the inclusion of new heuristics to a task
ontology, in order to accomplish fine tuning solutions with the use of
materialized views. Thus, it becomes possible to propose the use of tuning
heuristics for indexes as well as for materialized views.
[15_PhD_brandao]
Rafael Rossi de Mello BRANDÃO.
A Capture & Access technology to support documentation and tracking of
qualitative research applied to HCI.
[Title in Portuguese: Uma tecnologia de Captura &
Acesso para suportar documentação e rastreamento de pesquisa qualitativa
aplicada a IHC].
Ph.D. Thesis. Eng. Presentation: 08/04/2015. 153 p. Advisor: Clarisse
Sieckenius de Souza.
Abstract: Tracking and exposure of qualitative methodology procedures is a
problem observed in the scientific community. The traditional form of research
publication makes it impractical to provide in detail all the decisions and
evidences considered in the course of a qualitative research. To overcome this
problem we propose an approach to structure all the procedures undertaken into
hypermedia documents with analyses and validations, allowing its representation
in a theoretical Capture & Access (C&A) model. This model enables the outlining
of the research inquiry, providing semantics to allow relationship between key
elements in a qualitative methodology. We discuss about five qualitative studies
that guided the reasoning about the proposed model, pondering on how to register
adequately the activities performed in HCI evaluations consolidating the
collected data in documents used in posterior analysis sessions. Additionally,
we present a proof of concept through an implementation using the C&A software
infrastructure offered by the CAS Project. This infrastructure supports the
recording of empirical data (text, images, audio, video, and slides), data
post-processing and the generation of multimedia documents. It is possible to
use tags for temporal annotation, create contexts to link data and retrieve
other relevant information from the captured investigation processes.
[15_PhD_gomes]
Raphael do Vale Amaral GOMES. Crawling the
linked data cloud.
[Title in Portuguese: Coleta de dados interligados].
Ph.D. Thesis. Eng. Presentation: 12/05/2015. 118 p. Advisor: Marco Antonio
Casanova.
Abstract: The linked data best practices recommend publishing a new
tripleset using well-known ontologies and to interlink the new tripleset with
other triplesets. However, both are difficult tasks. This thesis describes
frameworks for metadata crawlers that help selecting the ontologies and
triplesets to be used, respectively, in the publication and the interlinking
processes. Briefly, the publisher of a new tripleset first selects a set of
terms that describe the application domain of interest. Then, he submits the set
of terms to a metadata crawler, constructed using one of the frameworks
described in the thesis, which searches for triplesets whose vocabularies include
terms directly or transitively related to those in the initial set of terms. The
crawler returns a list of ontologies that are used for publishing the new
tripleset, as well as a list of triplesets with which the new tripleset can be
interlinked. Hence, the crawler focuses on specific metadata properties,
including subclass of, and returns only metadata, which justifies the
classification “metadata focused crawler”.
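The crawl itself can be pictured as a transitive closure over metadata links
followed by a vocabulary intersection, as in the toy sketch below; the
in-memory graphs stand in for metadata a real crawler would fetch from the
linked data cloud.
```python
# Sketch of the metadata-focused crawl: starting from seed terms, follow
# subclass-of links transitively and collect triplesets whose vocabularies
# mention any reached term. All data here is a hypothetical stand-in.

from collections import deque

subclass_of = {"Painter": ["Artist"], "Artist": ["Person"]}
tripleset_vocab = {"dbpedia": {"Person", "Artist"}, "geonames": {"Place"}}

def crawl(seed_terms):
    seen, queue = set(seed_terms), deque(seed_terms)
    while queue:                       # transitive closure over subclass-of
        term = queue.popleft()
        for parent in subclass_of.get(term, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return [ts for ts, vocab in tripleset_vocab.items() if vocab & seen]

print(crawl({"Painter"}))  # ['dbpedia'] — reached via Artist and Person
```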
[15_MSc_rottgen]
Raphael Alexander ROTTGEN.
Institutional
ownership as a predictor of future security returns.
[Title in English: Uso de dados das carteiras de
investidores institucionais na predição de retornos de ações].
M.Sc. Diss. Eng. Presentation: 16/06/2015. 96 p. Advisor: Eduardo Sany
Laber.
Abstract: Data on institutional ownership of securities is nowadays publicly
available in a number of jurisdictions and can thus be used in models for the
prediction of security returns. A number of recently launched investment
products explicitly use such institutional ownership data in security selection.
The purpose of the current study is to apply statistical learning algorithms to
institutional ownership data from the United States, in order to evaluate the
predictive validity of features based on such institutional ownership data with
regard to future security returns. Our analysis identified that a support vector
machine managed to classify securities, with regard to their four-quarter
forward returns, into three bins with significantly higher accuracy than pure
chance would predict. Even higher accuracy was achieved when "predicting"
realized, i.e. past, four-quarter returns.
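The classification setup can be sketched as follows with scikit-learn; the
features and labels are random placeholders standing in for the
ownership-derived features and return bins of the study, so the accuracy
printed here is just chance level.
```python
# Sketch of the setup: an SVM assigning securities to three forward-return
# bins from ownership-based features (placeholder data, not the real study).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))        # e.g. ownership level, breadth, changes
y = rng.integers(0, 3, size=600)     # bins: 0 = low, 1 = mid, 2 = high return

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # ~1/3 on random labels
```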
[15_PhD_berardi]
Rita Cristina Galarraga BERARDI.
Design
rationale in the triplification of relational databases.
[Title in Portuguese: Design rationale na
triplificação de bancos de dados relacionais].
Ph.D. Thesis. Eng. Presentation: 04/02/2015. 100 p. Advisor: Marco Antonio
Casanova.
Abstract: One of the most popular strategies to publish structured data on
the Web is to expose relational databases (RDB) in the RDF format. This process
is called RDB-to-RDF mapping, or triplification. Furthermore, the Linked Data
principles offer useful guidelines for this process. Broadly stated, there are
two main approaches to map relational databases into RDF: (1) the direct mapping
approach, where the database schema is directly mapped to an RDF schema; and (2)
the customized mapping approach, where the RDF schema may significantly differ
from the original database schema. In both approaches, there are challenges
related to the publication and to the consumption of the published data. This
thesis proposes the capture of design rationale as a valuable source of
information to minimize the challenges in RDB-to-RDF processes. Essentially, the
capture of design rationale increases the awareness about the actions taken over
the relational database to map it as an RDF dataset. The main contributions of
this thesis are: (1) a design rationale (DR) model adequate to RDB-to-RDF
processes, independently of the approach (direct or customized) followed; (2)
the integration of a DR model in an RDB-to-RDF direct mapping process and in an
RDB-to-RDF customized mapping process using the R2RML language; (3) the use of
the captured DR to improve the recommendations of vocabularies to reuse.
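For intuition, a direct mapping step can be sketched as below: each row becomes
a resource typed by its table, with columns as predicates. The base URI and
helper are hypothetical; production pipelines would drive this from R2RML
mappings, which is where the captured design rationale applies.
```python
# Sketch of a direct mapping step: one relational row becomes RDF triples,
# with the table name as class and columns as predicates (toy URIs).

BASE = "http://example.org/"

def row_to_triples(table, pk, row):
    subject = f"<{BASE}{table}/{row[pk]}>"
    triples = [(subject, "a", f"<{BASE}{table}>")]
    for col, value in row.items():
        if col != pk:
            triples.append((subject, f"<{BASE}{table}#{col}>", f'"{value}"'))
    return triples

row = {"id": 7, "name": "Ana", "dept": "CS"}
for s, p, o in row_to_triples("employee", "id", row):
    print(s, p, o, ".")
```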
[15_PhD_silva]
Rodrigo Marques Almeida da SILVA.
Um método otimizado de renderização fotorealista com distribuição estatística e
seleção automática de técnicas.
[Title in English: An optimized photorealistic
rendering method with statistic distribution and automatic rendering technique
selection].
Ph.D. Thesis. Eng. Presentation: 09/04/2015. 180 p. Advisor: Bruno Feijó.
Abstract: The photorealistic rendering process for cinema and TV
increasingly demands processing power, requiring fast parallel algorithms and
effective task distribution systems. However, the processes currently used by
academia and industry still consume several days to evaluate an
animation in super-resolution (typically 8K), which hampers the
improvement of artistic quality and limits the number of experiments with scene
parameters. In this work, we focus on the optimization of three processes
involved in photorealistic rendering, reducing the total time of rendering
substantially. Firstly, we optimize the local rendering, in which the system
works to render a set of pixels optimally, taking advantage of the available
hardware resources and using previous rendering data. Secondly, we optimize the
management process, which is changed not only to distribute frames but also to
analyze all the rendering infrastructure, optimizing the distribution process
and allowing the establishment of goals as time and cost. Furthermore, the
management model is expanded to the cloud, using the cloud as a processing
overflow. Thirdly, we propose a new optimized process to evaluate the rendering
task collaboratively, where each node communicates partial results to other
nodes, allowing the optimization of the rendering process in all nodes.
Altogether, this thesis is an innovative proposal to improve the whole process
of high-performance rendering, removing waste of resources and reducing rework.
[15_MSc_magalhaes]
Rômulo
de Carvalho MAGALHÃES.
Operations over lightweight
ontologies. [Title in Portuguese: Operações sobre ontologias leves].
MSc. Diss. Eng. Presentation: 30/01/2015. 106 p. Advisor: Marco Antonio
Casanova.
Abstract: This work addresses ontology design problems by treating
ontologies as theories and by defining a set of operations that map ontologies
into ontologies, including their constraints. The work first summarizes the base
knowledge needed to define the class of ontologies used and proposes four
operations to manipulate them. It then shows how the operations work and how
they may help design new ontologies. The core of this work is the description of
the implementation of the operations as a Protégé plug-in, detailing the
architecture and including use-case examples.
[15_PhD_araujo]
Thiago Pinheiro de ARAÚJO.
Using runtime information and maintenance knowledge to assist failure diagnosis,
detection and recovery.
[Title in Portuguese: Utilizando informações da
execução do sistema e conhecimentos de manutenção para auxiliar o diagnóstico,
detecção e recuperação de falhas].
Ph.D. Thesis. Eng. Presentation: 07/10/2015. 192 p. Advisor: Arnd von Staa.
Abstract: Even software systems developed with strict quality control may
fail during their lifetime. When a failure is observed in a production
environment, the maintainer is responsible for diagnosing the cause and
eventually removing it. However, for a critical service this might take too
long; hence, if possible, the failure signature should be identified in order
to generate a recovery mechanism that automatically detects and handles future
occurrences until a proper correction can be made. In
this thesis, recovery consists of restoring a correct context allowing
dependable execution, even if the causing fault is still unknown. To be
effective, the tasks of diagnosing and recovery implementation require detailed
information about the failed execution. Failures that occur during the test
phase happen in a controlled environment, allow adding specific code
instrumentation, and can usually be replicated, making it easier to study the
unexpected behavior. Failures that occur in the production environment,
however, are limited to the information captured at their first occurrence.
Since runtime failures are by nature unexpected, runtime data must be gathered
systematically to allow detection, diagnosis aimed at recovery, and eventually
diagnosis aimed at removing the causing fault. There is thus a trade-off
between the detail of the information inserted as instrumentation and system
performance: standard logging techniques usually have a low impact on
performance but carry insufficient information about the execution, while
tracing techniques can record precise and detailed information but are
impracticable in a production environment. This thesis proposes a
novel hybrid approach for recording and extracting a system's runtime information.
The solution is based on event logs, where events are enriched with contextual
properties about the current state of the execution at the moment the event is
recorded. Using these enriched log events a diagnosis technique and a tool have
been developed to allow event filtering based on the maintainer’s perspective of
interest. Furthermore, an approach using these enriched events has been
developed that allows detecting and diagnosing failures aiming at recovery. The
proposed solutions were evaluated through measurements and studies conducted
using deployed systems, based on failures that actually occurred while using the
software in a production context.
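A minimal sketch of the enriched-event idea follows, assuming hypothetical
property names; the actual solution defines which contextual properties to
snapshot and how maintainers express their perspective of interest.

    # Minimal sketch: log events enriched with contextual properties, plus a
    # filter over those properties (names and matching rule are assumptions).
    import json, time

    def log_event(stream, message, **context):
        """Record an event with a snapshot of the current execution context."""
        stream.append({"ts": time.time(), "msg": message, "ctx": context})

    def filter_events(stream, **of_interest):
        """Select events whose context matches the maintainer's perspective."""
        return [e for e in stream
                if all(e["ctx"].get(k) == v for k, v in of_interest.items())]

    events = []
    log_event(events, "order accepted", user="ana", queue_len=3, state="OPEN")
    log_event(events, "order rejected", user="bob", queue_len=97, state="FULL")
    print(json.dumps(filter_events(events, state="FULL"), indent=2))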
[15_PhD_silva]
Thuener Armando da SILVA. Optimization under uncertainty
for asset allocation.
[Title in Portuguese: Otimização sob incerteza para
alocação de ativos].
Ph.D. Thesis. Eng. Presentation: 06/04/2015. 99 p. Advisors: Marcus Vinicius
Soledade Poggi de Aragão and Michel Davi Valladão.
Abstract: Asset allocation is one of the most important financial decisions
made by investors. However, human decisions are not fully rational, and people
make several systematic mistakes due to overconfidence, irrational loss aversion
and misuse of information, among others. In this thesis, we developed two
distinct methodologies to tackle this problem. The first approach has a more
qualitative view, trying to map the investor's vision of the market. It tries to
mitigate irrationality in decision-making by making it easier for an investor to
demonstrate his/her preferences for specific assets. This first research uses
the Black-Litterman model to construct portfolios. Black and Litterman developed
a method for portfolio optimization as an improvement over the Markowitz model.
They suggested the construction of views to represent an investor's opinion
about future stock returns. However, constructing these views has proven
difficult, as it requires the investor to quantify several subjective
parameters. This work investigates a new way of creating these views by using
Verbal Decision Analysis. The second research focuses on quantitative methods to
solve the multistage asset allocation problem. More specifically, it modifies
the Stochastic Dual Dynamic Programming (SDDP) method to consider real asset
allocation models. Although SDDP is a consolidated solution technique for
large-scale problems, it is not suitable for asset allocation problems due to
the temporal dependence of returns. Indeed, SDDP assumes stagewise
independence of the random process, which ensures a unique cost-to-go function
for each time stage. For the asset allocation problem, time dependency is typically
nonlinear and on the left-hand side, which makes traditional SDDP inapplicable.
This thesis proposes an SDDP variation to solve real asset allocation problems
for multiple periods, by modeling time dependence as a Hidden Markov Model with
concealed discrete states. Both approaches were tested on real data and
empirically analyzed. The contributions of this thesis are the methodology to
simplify portfolio construction and the methods to solve real multistage
stochastic asset allocation problems.
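For reference, the views described above feed into the standard
Black-Litterman posterior expected return, given below with the notation
common in the literature (this is background material, not the thesis's
contribution):

    \mu_{BL} = \left[ (\tau\Sigma)^{-1} + P^{\top}\Omega^{-1}P \right]^{-1}
               \left[ (\tau\Sigma)^{-1}\Pi + P^{\top}\Omega^{-1}q \right]

where \Pi is the vector of equilibrium returns, \Sigma the covariance of
returns, \tau a scaling constant, P the pick matrix encoding the views, q the
vector of view returns, and \Omega the covariance expressing the uncertainty
of the views.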
[15_MSc_faria]
Waldecir Vicente FARIA.
D-Engine: a framework for the random execution of plans in agent-based models.
[Title in Portuguese: D-Engine: um framework para a
execução aleatória de planos em modelos baseados em agentes].
MSc. Diss. Eng. Presentation: 10/07/2015. 72 p. Advisor: Hélio Cortes Vieira
Lopes.
Abstract: An important question in agent-based systems is how to execute
a planned action in a random way. The answer to this question is fundamental
to keeping the user's interest in a product, not just because it makes the
experience less repetitive but also because it makes the product more
realistic. This kind of action execution applies mainly to simulators and to
serious and entertainment games based on agent models. Sometimes the
randomness can be achieved simply by generating random numbers. However, when
creating a more complex product, it is advisable to use statistical or
stochastic knowledge so as not to ruin the product's consumption experience.
In this work we support the creation of dynamic and interactive animations
and stories using an arbitrary agent-based model. Inspired by stochastic
methods, we propose a new framework called D-Engine, which is able to create
a set of timestamps that is random yet has a well-known expected behavior,
describing the execution of an action in a discrete way following some
specification. While these timestamps
allow us to animate a story, an action or a scene, the mathematical results
generated with our framework can be used to aid other applications such as
result forecasting, nondeterministic planning, interactive media and
storytelling. In this work we also present how to implement two different
applications using our framework: a duel scenario and an interactive online
auction website.
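As a rough illustration of "random but with a well-known expected behavior",
the sketch below draws action-step timestamps from a Poisson process, so that
individual runs differ while the expected step rate is fixed. This is an
assumption-laden stand-in for the idea, not D-Engine's actual algorithm.

    # Minimal sketch: random step timestamps with a known expected rate.
    import random

    def plan_timestamps(duration, rate, seed=None):
        """Event times in [0, duration) with expected count rate*duration."""
        rng = random.Random(seed)
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate)   # exponential inter-arrival times
            if t >= duration:
                return times
            times.append(round(t, 3))

    # e.g. a 10-second 'walk' action animated at ~2 steps per second on average
    print(plan_timestamps(duration=10.0, rate=2.0, seed=42))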
[15_MSc_oizumi]
Willian Nalepa OIZUMI. Synthesis of code
anomalies: revealing design problems in the source code.
MSc. Diss. Eng. Presentation: 02/09/2015. 103 p. Advisor: Alessandro Fabricio
Garcia.
Abstract: Design problems affect almost all software projects and make their
maintenance expensive and cumbersome. As design documents are rarely available,
programmers often need to identify design problems from the source code.
However, the identification of design problems is not a trivial task for several
reasons. For instance, the reification of a design problem tends to be
scattered across several anomalous code elements in the implementation.
Unfortunately, previous work has wrongly assumed that each single code
anomaly, popularly known as a code smell, can be used as an accurate indicator
of a design problem. There is growing empirical evidence that several types of
design problems are often related to a set of inter-related code anomalies,
the so-called code-anomaly agglomerations, rather than to individual anomalies
only. In
this context, this dissertation proposes a new technique for the synthesis of
code-anomaly agglomerations. The technique is intended to: (i) search for varied
forms of agglomeration in a program, and (ii) summarize different types of
information about each agglomeration. The evaluation of the synthesis technique
was based on the analysis of several industry-strength software projects and a
controlled experiment with professional programmers. Both studies suggest that
the use of the synthesis technique helped programmers identify more relevant
design problems than the use of conventional techniques.
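For illustration only, the sketch below computes one simple form of
agglomeration, namely anomalies concentrated in the same class; the
dissertation's technique searches for richer forms of agglomeration and
summarizes more information about each one. Anomaly kinds and the grouping
relation are illustrative assumptions.

    # Minimal sketch: group code anomalies into intra-class agglomerations.
    from collections import defaultdict

    def agglomerate(anomalies):
        """Group (kind, cls, method) anomaly records by host class."""
        groups = defaultdict(list)
        for a in anomalies:
            groups[a["cls"]].append(a)
        # an agglomeration here is a class concentrating more than one anomaly
        return {cls: items for cls, items in groups.items() if len(items) > 1}

    found = [
        {"kind": "LongMethod",        "cls": "OrderService", "method": "process"},
        {"kind": "FeatureEnvy",       "cls": "OrderService", "method": "total"},
        {"kind": "GodClass",          "cls": "OrderService", "method": None},
        {"kind": "LongParameterList", "cls": "Report",       "method": "build"},
    ]
    for cls, items in agglomerate(found).items():
        print(cls, "->", [a["kind"] for a in items])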