Theses and Dissertations
2011
ABSTRACTS
Departamento de Informática
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
This file contains the list of the M.Sc. dissertations and Ph.D. theses presented to the Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio, Brazil, in 2011. They are all available in print format and, according to the authors' preference, some of them are freely available for download, while others are available for download exclusively to the PUC-Rio community (*).
For any requests, questions, or suggestions, please contact:
Rosane Castilho
bib-di@inf.puc-rio.br
Last update: 18/JUNE/2012
[11_MSc_branco]
Adriano Francisco BRANCO. Um modelo de programação para RSSF com suporte à reconfiguração dinâmica de aplicações. [Title in English: A WSN programming model with dynamic reconfiguration support].
M.Sc. Diss. Port. Presentation: 06/04/11. 98 p. Advisors: Noemi de La Rocque Rodriguez and Silvana Rosseto (UFF).
Abstract: Some basic characteristics of wireless sensor networks (WSN) make application creation and reconfiguration difficult tasks. A programming model is presented to simplify these tasks. This model is based on a set of parameterized components and on a finite state machine, and allows the remote configuration of different applications over the same set of installed components. We describe tests that evaluate its impact on the development process and the ease of applying modifications to a running application. We also measure the additional impact of remote configuration on network activity.
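To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical component names and structure; the thesis targets WSN nodes and does not use this notation) of an application expressed as a finite state machine over parameterized components, so that a new configuration table, rather than new code, can be shipped to the nodes:

```python
# Hypothetical illustration: an application as an FSM over parameterized
# components. Shipping a new 'config' table reconfigures the application
# without installing new code on the node.
config = {
    "initial": "SAMPLE",
    "states": {
        "SAMPLE": {"component": "sensor_read", "params": {"period_s": 60},
                   "next": "SEND"},
        "SEND":   {"component": "radio_tx", "params": {"power": "low"},
                   "next": "SAMPLE"},
    },
}

def run(config, components, steps):
    state = config["initial"]
    for _ in range(steps):
        entry = config["states"][state]
        components[entry["component"]](**entry["params"])  # invoke component
        state = entry["next"]

# Stub components standing in for services installed on the node:
components = {"sensor_read": lambda period_s: None,
              "radio_tx": lambda power: None}
run(config, components, steps=4)
```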
[11_MSc_silva]
Aleksander Medella Campos da SILVA. Uma biblioteca de componentes de software para simuladores de radar. [Title in English: A software component library for radar simulators].
M.Sc. Diss. Port. Presentation: 24/08/11. 95 p. Advisors: Renato Fontoura de Gusmão Cerqueira and Edward Hermann Haeusler.
Abstract: Radar systems are becoming increasingly complex. Many radars are constituted by an array of elements, and a minimum of coordination among the elements' functions is needed. Building a prototype in order to validate a radar design is an expensive task. On the other hand, many common features are shared among different radar architectures. This dissertation describes the architecture of a simulator that is able to represent most radar systems designed according to the basic principles of radars. The simulator follows a flexible component-based architecture, and five quite different kinds of radar are described and simulated using the presented architecture. Real scenarios are taken into account in the validation of the simulations.
[11_MSc_valeriano]
Allan Alves VALERIANO. Um mecanismo de seleção de componentes para o middleware Kaluana usando a noção de contratos de reconfiguração. [Title in English: A component selection mechanism for the Kaluana middleware using the notion of reconfiguration contracts].
M.Sc. Diss. Port. Presentation: 08/04/11. 87 p. Advisor: Markus Endler.
Abstract: Mobile computing creates the need for applications to be adaptable according to the user's context. Specific user demands, as well as changes in the computational context of mobile applications, require clients to be able to adapt dynamically to the new execution scenario. These adjustments should be appropriate and should maintain the quality of service, avoiding failures and preventing degradation of application performance. This dissertation proposes an extension of the Kaluana middleware that provides a mechanism for selecting components for adaptive applications based on the notion of reconfiguration contracts. The selection is also based on the notion of equivalence between the components' public interfaces, and it considers the execution restrictions of the candidate components with respect to the device's execution context when evaluating candidates for instantiation. It aims to maintain the compatibility of new components with the components already in use as well as with the execution context, i.e. the current status of the device's resources. Owing to the notion of equivalence between the interface specifications of components, the application can request a component through the interface of the requested service, without needing to know the component's name or any other specific feature that would tie it to a given implementation.
[11_MSc_rocha]
Allan Carlos Avelino ROCHA. Visualização volumétrica ilustrativa de malhas não estruturadas. [Title in English: Illustrative volume visualization for unstructured meshes].
M.Sc. Diss. Port. Presentation: 20/06/11. 54 p. Advisor: Waldemar Celes Filho.
Abstract: Scientific visualization techniques create images attempting to
reveal complex structures and phenomena. Illustrative techniques have been
incorporated to scientific visualization systems in order to improve the
expressiveness of such images. The rendering of feature lines is an important
technique for better depicting surface shapes and features. In this thesis, we
propose to combine volume visualization of unstructured meshes with illustrative
isosurfaces. This is accomplished by extending a GPU-based ray-casting algorithm
to incorporate illustration with photic extremum lines, a type of feature line able to capture sudden changes of luminance, conveying shapes in a perceptually correct way.
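For context, photic extremum lines are usually defined (following the formulation of Xie et al., which this notation assumes) as the loci where the magnitude of the luminance gradient reaches a local maximum along the gradient direction:

    D_{\mathbf{w}}\,\|\nabla I\| = 0 \quad\text{and}\quad D_{\mathbf{w}} D_{\mathbf{w}}\,\|\nabla I\| < 0, \qquad \mathbf{w} = \nabla I / \|\nabla I\|,

where I is the luminance and D_{\mathbf{w}} denotes the directional derivative along \mathbf{w}.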
[11_MSc_reis]
André Luiz Castro de Almeida REIS. Editor de diagramas dirigido por metamodelos. [Title in English: Diagram editor driven by metamodels].
M.Sc. Diss. Port. Presentation: 26/08/11. 111 p. Advisor: Simone Diniz Junqueira Barbosa.
Abstract: Diagram editors have been very useful for creating design solutions in the area of human-computer interaction. They facilitate the use of modeling languages and provide control over the elements of the solution space, preventing the user from using an invalid lexical item of the chosen language. The elements are defined in a metamodel, which basically consists of a set of concepts within a given domain. As a result, users gain speed and reliability in the process of creation. However, many editors do not guarantee that the designed solution meets the language syntax. For this, we need an editor that, in addition to having control over the language symbols, also provides support for the use of models, going beyond graphical editing and also making use of the syntax rules defined in each metamodel. With this set of rules defining the form of the language, the user can be warned of possible rule violations while building the solution. The rules describe the syntax of the language through its grammar. To parse a diagram means to try to find a sequence of rule applications that derive it from a grammar or some representation of it. Considering this approach, this dissertation presents a study on diagram editors, and a metamodel-driven tool that allows the user, by defining a metamodel, to make use of a generic diagram editor for visual languages that can control the vocabulary and grammar of the created diagrams. Thus the goal of this research is to propose a tool that encompasses these solutions and is focused on visual languages common in the area of human-computer interaction, such as MoLIC, CTT and statecharts.
[11_MSc_carvalho]
Andréa Weberling CARVALHO. Criação automática de visões materializadas em SGBDs relacionais. [Title in English: Automatic creation of materialized views in relational DBMS].
M.Sc. Diss. Port. Presentation: 20/04/11. 83 p. Advisor: Edward Hermann Haeusler.
Abstract: As database applications become more complex, tuning database systems in order to improve query response times also becomes harder. One may consider materialized views, relational database objects that store data resulting from specific queries, to obtain better performance. This dissertation proposes the automatic creation of materialized views. A non-intrusive architecture is used in order to keep DBMS source code unchanged. There is a need to estimate the creation cost and the benefits obtained from the presence of the materialized views; heuristics are proposed to help with the automatic decision on the creation of these materialized views for a given workload. Simulation tests for the TPC-H benchmark and the MS SQL Server DBMS are presented.
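For intuition, heuristics of this kind typically rest on a cost-benefit inequality of the following general form (assumed notation, not the dissertation's exact model): create view V for workload W when

    \sum_{q \in W} f_q \, \big( c(q) - c(q \mid V) \big) \;>\; c_{\text{create}}(V) + c_{\text{maint}}(V),

where f_q is the frequency of query q, c(q) and c(q | V) are the estimated costs of answering q without and with the view, and the right-hand side accounts for creating and maintaining V.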
[11_MSc_monteiro]
Andrei Alhadeff MONTEIRO. Many-core fragmentation simulation. [Title in Portuguese: Implementação de fragmentação em arquitetura de multi-processadores].
M.Sc. Diss. Port. Presentation: 17/08/11. 59 p. Advisor: Waldemar Celes Filho.
Abstract: A GPU-based computational framework is presented to deal with dynamic failure events simulated by means of cohesive zone elements. The work is divided into two parts. In the first part, we deal with pre-processing of the information and verify the effectiveness of dynamic insertion of cohesive elements in large meshes. To this effect, we employ a simplified topological data structure specialized for triangles. In the second part, we present an explicit dynamics code that implements an extrinsic cohesive zone formulation where the elements are inserted on-the-fly, when and where needed. The main challenge in implementing a GPU-based computational framework using an extrinsic cohesive zone formulation resides in being able to dynamically adapt the mesh in a consistent way, inserting cohesive elements on fractured facets. In order to handle that, we extend the conventional data structure used in finite element codes (based on element incidence) and store, for each element, references to the adjacent elements. To avoid concurrent access to shared entities, we employ the conventional strategy of graph coloring: in a pre-processing phase, each node of the dual graph (bulk element of the mesh) is assigned a color different from the colors assigned to adjacent nodes. In that way, elements of the same color can be processed in parallel without concurrency. All the procedures needed for the insertion of cohesive elements along fracture facets and for computing node properties are performed by threads assigned to triangles, invoking one kernel per color. Computations on existing cohesive elements are also performed based on adjacent bulk elements.
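As a rough sketch of the coloring step described above (a minimal greedy version in Python over a toy dual graph; data and names are hypothetical, and the thesis implementation runs on the GPU):

```python
# Greedy coloring of the dual graph (nodes = bulk elements, edges = shared
# facets), so elements of one color can be processed in one kernel launch
# without write conflicts on shared entities.
def color_dual_graph(adjacency):
    """adjacency: dict mapping element id -> list of adjacent element ids."""
    colors = {}
    for elem in adjacency:
        used = {colors[n] for n in adjacency[elem] if n in colors}
        colors[elem] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

# Toy dual graph of four triangles in a strip:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors = color_dual_graph(adj)

# Process elements color by color ("one kernel per color"):
for c in sorted(set(colors.values())):
    batch = [e for e, col in colors.items() if col == c]
    # launch_kernel(batch)  # placeholder for the per-color GPU kernel
```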
[11_MSc_souza]
Bruno de Figueiredo Melo e SOUZA. Modelos de fatoração matricial para recomendação de vídeos. [Title in English: Matrix factorization models for video recommendation].
M.Sc. Diss. Port. Presentation: 29/08/11. 67 p. Advisor: Ruy Luiz Milidiú.
Abstract: Item recommendation from implicit feedback datasets consists of passively tracking different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to improve customer experience by providing personalized recommendations that fit users' tastes. In this work we evaluate the performance of different matrix factorization models tailored to the recommendation task on the implicit feedback dataset extracted from the access logs of Globo.com's video site. We propose treating the data as indications of a positive preference from a user regarding the video watched. We also evaluate the impact on the resulting recommendations of effects associated with either users or items, known as biases or intercepts, independent of any interactions, and of their time-changing behavior throughout the life span of the data. We also suggest a scalable and incremental procedure, which scales linearly with the input data size. In trying to predict users' intention to watch new videos, our best factorization model achieves an RMSE of 0.0524, using user and video biases as well as their temporal dynamics.
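For reference, the standard form of such a biased factorization model with temporal dynamics (the general formulation from the literature; the exact variant used in the dissertation may differ) predicts the preference of user u for video i at time t as

    \hat{r}_{ui}(t) = \mu + b_u(t) + b_i(t) + q_i^{\top} p_u,

where \mu is the global mean, b_u(t) and b_i(t) are the time-dependent user and item biases, and p_u and q_i are the latent factor vectors whose inner product captures the user-item interaction.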
[11_PhD_gadelha]
Bruno Freitas GADELHA. Uma abordagem de desenvolvimento de groupware baseada em linha de produto de software e modelo 3C de colaboração. [Title in English: An approach for groupware development based on software product lines and the 3C collaboration model].
Ph.D. Thesis. Port. Presentation: 21/12/2011. 101 p. Advisor: Hugo Fuks.
Abstract: In this thesis we explore software development in the context of groupware, specifically in support of collaborative learning. Groupware development is not a trivial task, given that technological and social issues are involved. On the technological side, a huge amount of time is spent implementing infrastructure aspects, leaving little time for the implementation of innovative solutions for collaboration. On the social side, we should take into account that group work is dynamic and that group composition changes over time. We therefore developed a software product line for groupware based on the 3C Collaboration Model. The groupware derivation process starts with the formalization of collaborative learning techniques in collaboration scripts. In order to support this collaboration process we developed GroupwareBuilder, which reads a collaboration script and derives groupware tailored to the tasks described in it. We carried out a functional evaluation and a case study. In the functional evaluation, we aimed at obtaining a proof of concept for GroupwareBuilder by deriving groupware to support the “Critical Debate” and “Buzz Groups” collaboration scripts. In order to analyze how GroupwareBuilder derives groupware from other collaborative learning techniques described by different teachers, we conducted a case study. The main contribution of this thesis is an approach that enables the derivation and runtime customization of groupware from collaboration scripts written by the users, rather than from a list of software requirements as in other SPL approaches.
[11_MSc_lima]
Bruno Seabra Nogueira Mendonça LIMA. Composer: aspectos não-funcionais em um ambiente de autoria para aplicações NCL. [Title in English: Composer: non-functional aspects in an authoring environment for NCL applications].
M.Sc. Diss. Port. Presentation: 15/04/11. 81 p. Advisor: Luiz Fernando Gomes Soares.
Abstract: The chain of work involved in the creation, development and transmission of hypermedia content includes several actors, environments and pieces of equipment, from the content creator, through the application developer, all the way to the operator of the transmission service. Each of these actors is immersed in a different work environment and has specific roles in the creation and editing of content delivered to the final user. Nowadays, even final users are demanding tools that can enrich their content. A single authoring tool cannot meet the requirements of all these different actors. Currently, there are authoring tools focused on satisfying a small subset of these actors, but even this small subset is not fully satisfied, since these tools were built, most of the time, based only on functional requirements. This work discusses the importance of non-functional aspects in the development of new hypermedia authoring tools. This dissertation proposes an architecture that enables tools to meet the specific requirements of each actor in the process of creating hypermedia content. This architecture relies on extensibility, adaptability, performance and scalability. In order to test the proposal, we have developed an authoring tool for NCL (Nested Context Language) applications based on the proposed architecture. NCL was chosen because it is the standard language of the declarative system (Ginga-NCL) of the Brazilian Terrestrial Digital TV standard (ISDB-TB) and of ITU-T IPTV services. NCL allows the authoring of hypermedia documents in a simple and expressive form.
[11_PhD_avila]
Bruno Tenório ÁVILA. Compressão de números naturais, sequência de bits e grafos. [Title in English: Compression of natural numbers, bit sequences and graphs].
Ph.D. Thesis. Port. Presentation: 16/09/11. 100 p. Advisor: Eduardo Sany Laber.
Abstract: This thesis addresses compression problems for the following data types: numbers, bit sequences and webgraphs. For the problem of compressing a bit sequence, we demonstrate the relationship between merge algorithms and binary source coders. Then, we show that the binary merge, recursive merge and probabilistic merge algorithms generate, respectively, an entropy coder based on run-lengths encoded with the Rice code, the binary interpolative coder, and the random Rice coder, which is a new variant of the Rice code. For the problem of webgraph compression, we propose a new compact representation for webgraphs, called the w-tree, built specifically for external memory (disk), the first one of its genre. We also propose a new type of layout designed specifically for webgraphs, called the scaled layout. In addition, we show how to build a cache-oblivious layout to exploit the memory hierarchy, also the first of its kind. We offer several types of queries that can be performed, and this is the first representation to support batched random read query execution and advanced query optimization, including in main memory. Finally, we performed a series of experiments showing that the w-tree provides compression rates and running times competitive with other compact representations for main memory. We thus demonstrate empirically the feasibility of a compact representation for external memory in practice, contrary to the assertion of several researchers.
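To make the run-length/Rice connection concrete, here is a minimal sketch of the standard Rice code (textbook form, in Python; this is not the thesis's random Rice variant):

```python
# Standard Rice code with parameter k: the quotient n >> k in unary
# (that many '1' bits followed by a '0'), then the k low-order bits of n.
def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

# Run-length coding of a sparse bit sequence with Rice-coded gap lengths:
bits = "0001000000010011"
gaps = [len(run) for run in bits.split("1")[:-1]]  # zeros before each '1'
encoded = "".join(rice_encode(g, 2) for g in gaps)
print(gaps, encoded)  # [3, 7, 2, 0] -> '0111011010000'
```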
[11_MSc_marques]
Carlos Roberto MARQUES JUNIOR. Configuração colaborativa de linha de produtos de software. [Title in English: Collaborative configuration of a software product line].
M.Sc. Diss. Port. Presentation: 07/04/10. 98 p. Advisor: Carlos José Pereira de Lucena.
Abstract: Product configuration is a key activity for enabling mass customization. It corresponds to tailoring a software application from a software product line while respecting individual customer requirements. In practice, the product configuration activity is challenging, mainly because it involves numerous stakeholders with different expertise making decisions. Several works propose decomposing the configuration activity into pre-arranged stages, whereby stakeholders can make their decisions in a separate and coherent fashion. However, because the knowledge required at each stage is geographically decentralized, and because customer requirements may be imprecise and change frequently, the proposed solutions do not apply. To address these issues, this work proposes a dynamic and collaborative product configuration approach based on the personal assistant metaphor. Personal assistants cooperatively coordinate stakeholders' decisions and proactively perform tasks such as reasoning about the ramifications of decisions, integrating distributed decisions and resolving divergent requirements. A tool prototype, called Libertas, and two case studies that evaluate the applicability of our approach are also presented. The first case study analyzes the configuration process of an operational system supporting the business requirements of an enterprise. The second one addresses a scenario of a software product line for web portals.
[11_MSc_campos]
Daniel de Vasconcelos CAMPOS. SisApC2: uma estratégia baseada em sistemas computacionais móveis para apoiar atividades de comando e controle. [Title in English: SisApC2: a strategy based on mobile computer systems to support command and control activities].
M.Sc. Diss. Eng. Presentation: 29/03/11. 88 p. Advisors: Marcelo Gattass and Roberto de Beauclair Seixas.
Abstract: The Command and Control theory of John Boyd, a 20th-century military strategist, is a powerful tool not only for the military, but also for any civilian activity that requires monitoring people, vehicles, boats, or any other elements of interest. Nowadays computer graphics techniques are important for the OODA loop (Observe–Orient–Decide–Act), especially in the observe and orient steps, making them more efficient and less prone to errors. The implementation of Command and Control techniques with current technology is not a simple task: it involves the implementation of graphical, mobile and distributed components, each of which can be implemented with a variety of technologies that are often incompatible with one another. This dissertation proposes a low-cost framework capable of effectively supporting the implementation of a Command and Control computer system. The framework is based on reliable open-source technologies and has proven versatile for different types of applications. The dissertation also presents the implementation of two different applications with the proposed technology to support its evaluation.
[11_MSc_conceicao]
Diêgo Bispo CONCEIÇÃO. Simulação e estratégias de negociação de ações com agentes de software. [Title in English: Simulation and stock trading strategies with software agents].
M.Sc. Diss. Port. Presentation: 02/09/11. 82 p. Advisor: Carlos José Pereira de Lucena.
Abstract: The financial market has seen significant growth in the automation of decisions and in the execution of strategies that can achieve good returns on investments. Consequently, the need for an increasingly robust and reliable environment in which to analyze different investment strategies has grown. Based on this need, this work presents "A Multi-Agent System Framework For Automated Stock Exchange Simulation" (FrAMEx), which allows the creation of different simulators for the financial market based on the software agents paradigm. Intraday and interday simulators created from FrAMEx are presented in this document, along with an analysis of different investment strategies executed in these environments by investor agents. These agents achieved good performance and participated in two editions of the MASSES competition; a description of the performance of each developed agent is also presented.
[11_MSc_cardoso]
Eduardo Teixeira CARDOSO. Efficient methods for information extraction in news webpages. [Title in Portuguese: Métodos eficientes para extração de informação em páginas de notícias].
M.Sc. Diss. Port. Presentation: 24/08/11. 58 p. Advisor: Eduardo Sany Laber.
Abstract: We tackle the task of news webpage segmentation, specifically identifying the news title, publication date and story body. While there are very good results in the literature, most of them rely on webpage rendering, which is a very time-consuming step. We focus on scenarios with a high volume of documents, where a short execution time is a must. The chosen approach extends our previous work in the area, combining structural properties with hints of visual presentation styles, computed with a method faster than regular rendering, and machine learning algorithms. In our experiments, we paid special attention to aspects that are often overlooked in the literature, such as processing time and the generalization of the extraction results to unseen domains. Our approach has shown to be about an order of magnitude faster than an equivalent full-rendering alternative while retaining a good quality of extraction.
[11_MSc_miranda]
Fabio Markus Nunes MIRANDA. Volume rendering of unstructured hexahedral meshes. [Title in Portuguese: Renderização volumétrica de malha não estruturada de hexaedros].
M.Sc. Diss. Port. Presentation: 05/09/11. 47 p. Advisor: Waldemar Celes Filho.
Abstract: Important engineering applications use unstructured hexahedral meshes for numerical simulations. Hexahedral cells, when compared to tetrahedral ones, tend to be more numerically stable and to require less mesh refinement. However, volume visualization of unstructured hexahedral meshes is challenging due to the trilinear variation of scalar fields inside the cells. The conventional solution consists in subdividing each hexahedral cell into five or six tetrahedra, approximating a trilinear variation by an inadequate piecewise linear function. This results in inaccurate images and increases memory consumption. In this thesis, we present an accurate ray-casting volume rendering algorithm for unstructured hexahedral meshes. In order to capture the trilinear variation along the ray, we propose the use of quadrature integration. We also propose a fast approach that better approximates the trilinear variation by a series of linear ones, considering the points of minimum and maximum of the scalar function along the ray. A set of computational experiments demonstrates that our proposal produces accurate results with a reduced memory footprint. The entire algorithm is implemented on graphics cards, ensuring competitive performance.
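For context, the trilinear field inside a hexahedral cell (standard form; notation assumed here) interpolates the eight corner values s_{ijk} as

    s(u,v,w) = \sum_{i,j,k \in \{0,1\}} s_{ijk}\, u^i(1-u)^{1-i}\, v^j(1-v)^{1-j}\, w^k(1-w)^{1-k},

so that, for a cell whose parametric coordinates u, v, w vary affinely in the ray parameter t, s(t) is a cubic polynomial; this is why the linear integration schemes devised for tetrahedra do not suffice and quadrature is needed.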
[11_MSc_pina]
Felipe Freixo PINA. Aplicações de DHT em sistemas de computação distribuída. [Title in English: Utilization of DHT in distributed computing systems].
M.Sc. Diss. Port. Presentation: 29/08/11. 63 p. Advisor: Noemi de La Rocque Rodriguez.
Abstract: P2P architectures are recognized for their decentralization and for encouraging cooperation among nodes. These characteristics bring fault tolerance and resource distribution among the nodes (by replication) to systems based on the P2P architecture. Systems based on P2P networks built using the DHT technique are scalable. Since this architecture is commonly used in content distribution systems, in this work we investigate the utilization of the DHT technique in distributed computing systems, where the shared resource is the nodes' computational power. Four routing protocols were analyzed to identify the most appropriate one for distributed computing systems, and the group concept was applied to achieve fault tolerance and resource distribution among nodes.
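To illustrate the DHT idea in miniature (a consistent-hashing sketch in Python; this shows the generic technique, not one of the four protocols analyzed in the dissertation):

```python
# Toy DHT-style lookup: keys and nodes are hashed onto a ring, and each
# key is routed to the first node clockwise from its hash position.
import hashlib
from bisect import bisect_right

def ring_hash(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % 2**32

nodes = ["node-a", "node-b", "node-c"]  # hypothetical peers
ring = sorted((ring_hash(n), n) for n in nodes)

def lookup(key):
    positions = [pos for pos, _ in ring]
    idx = bisect_right(positions, ring_hash(key)) % len(ring)  # wrap around
    return ring[idx][1]

print(lookup("job-42"))  # peer responsible for this unit of computation
```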
[11_PhD_viana]
Francisco Henrique de Freitas VIANA. Modelos e algoritmos para o team orienteering problem. [Title in English: Models and algorithms for the team orienteering problem].
Ph.D. Thesis. Port. Presentation: 12/09/11. 99 p. Advisors: Marcus Vinicius Soledade Poggi de Aragão and Eduardo Uchoa Barboza.
Abstract: The Team Orienteering Problem is a vehicle routing problem on a graph with durations associated with the arcs and profits assigned to visiting the vertices. In this problem, a fleet with a fixed number of identical vehicles performs the visits, and there is a limited total duration within which the routes must be completed. Each vertex can be visited at most once, and a solution is not obliged to visit all vertices, due to the constraint that limits the maximum duration of the routes. The goal of the problem is to maximize the total profit gathered by all routes. In this work, two approaches are proposed: an exact one and a heuristic one. In the exact approach, we developed an arc-based formulation and an extended formulation where each arc has an extra index representing the departure time of a vehicle traversing that arc. Through transformations on the extended formulation, we obtained a formulation whose relaxation - the restricted master problem - is solved using the column generation technique. A dynamic programming algorithm solves the column generation subproblem in pseudo-polynomial time. This algorithm generates non-elementary routes that allow subcycles. In order to cut off the subcycles, a new class of inequalities, called min-cut inequalities, is proposed. We applied a Branch-Cut-and-Price (BCP) algorithm, which allowed finding some new upper bounds. The exact approach achieved results competitive with the best exact algorithm previously proposed for this problem. In the heuristic approach, besides a k-opt neighborhood, we also exploited an ellipsoidal search that adds a new cut constraint to the formulation of the Branch-Cut-and-Price algorithm. This new cut reduces the search space to a neighborhood around a known set of solutions. This search is used as a crossover operator that runs in all iterations of an evolutionary algorithm. This approach converges in reasonable computational time and finds optimal or near-optimal solutions for some instances in the literature.
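For reference, the problem admits a compact statement (standard notation, assumed here rather than the dissertation's exact formulation): with m identical vehicles, profit p_i for vertex i, arc durations d_{ij}, visit variables y_i^v and arc variables x_{ij}^v,

    \max \sum_{v=1}^{m} \sum_{i \in V} p_i\, y_i^v
    \quad \text{s.t.} \quad \sum_{v=1}^{m} y_i^v \le 1 \ \ \forall i \in V, \qquad \sum_{(i,j) \in A} d_{ij}\, x_{ij}^v \le T_{\max} \ \ \forall v,

together with the usual degree and connectivity constraints linking the x and y variables.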
[11_PhD_abraham]
Frederico Rodrigues ABRAHAM. Visualização de modelos massivos de reservatórios naturais de petróleo. [Title in English: Visualization of complex natural black oil reservoir models].
Ph.D. Thesis. Port. Presentation: 12/09/11. 122 p. Advisor: Waldemar Celes Filho.
Abstract: Recent advances in parallel architectures for the numerical simulation of natural black oil reservoirs have allowed the simulation of highly discretized domains. As a consequence, these simulations produce an unprecedented volume of data, which must be visualized in 3D environments for careful analysis and inspection of the model. Conventional scientific visualization techniques are not viable for such very large models, creating a demand for the development of scalable visualization solutions. The need to visualize such complex data introduces several computational issues that must be addressed in order to achieve interactive rendering rates, such as the impossibility of storing the entire dataset in main memory. There are two main research areas that propose solutions for the visualization of models of such magnitude: distributed rendering and multi-resolution techniques. This work proposes solutions for the visualization of massive reservoir models in each of these research areas, and discusses the advantages and limitations of each solution. In the first part of the work, we propose a distributed system based on a sort-last approach for rendering such models on PC clusters where each PC is equipped with multiple GPUs. Through an efficient use of the available GPUs, combined with a pipelined implementation and the use of partial image composition on the cluster nodes, our proposal tackles the scalability issues that arise when using mid-to-large GPU clusters. The second part of the work proposes a hierarchical multi-resolution structure for black oil reservoir meshes, with a new simplification algorithm designed specifically for such meshes. The hierarchical structure introduces some new approaches with respect to related work, performing a much less conservative projected-error estimation. We propose a minimum refresh rate guarantee strategy for our multi-resolution rendering, which is the main goal of such systems. Afterwards, we introduce a proposal for rendering data associated with the original reservoir mesh mapped over the simplified meshes, such as the original model grid wireframe and reservoir properties. This proposal guarantees independence between the multi-resolution structure and the properties generated by a simulation, allowing the structure to be reused among several simulations of the same model. Experimental results demonstrate the effectiveness of the proposed solutions.
[11_PhD_faustino]
Geisa Martins FAUSTINO. Um método baseado em mineração de grafos para segmentação e contagem de clusters de máximos locais em imagens digitais. [Title in English: A graph-mining based method for segmentation and counting of local maximum clusters in digital images].
Ph.D. Thesis. Port. Presentation: 08/04/11. 79 p. Advisors: Marcelo Gattass and Carlos José Pereira de Lucena.
Abstract: A grayscale image can be viewed as a topological surface, and in this way objects of interest may appear as peaks (sharp mountains), domes (smooth hills) or valleys (V- or U-shaped). Generally, the top of a dome presents more than one local maximum, and it can thus be characterized by a local maximum cluster. Segmenting objects individually in images where they appear partially or totally fused is a problem that frequently cannot be solved by watershed segmentation or basic morphological image processing. Another issue is counting similar objects in previously segmented images: counting them manually is a tedious and time-consuming task, and its subjective nature can lead to wide variation in the results. This work presents a new method for segmenting and counting local maximum clusters in digital images through a graph-based approach. Using luminance information, the image is represented by a region adjacency graph, and a graph-mining algorithm is applied to segment the clusters. Finally, according to image characteristics, a graph-clustering algorithm can be added to the process to improve the final result. The object count is a direct result of the mining algorithm and, when applied, the clustering algorithm. The proposed method is tolerant to variations in object size and shape, and can easily be parameterized to handle different image groups arising from distinct objects. Tests made on a database with 262 images, composed of photographs of objects (group 1) and fluorescence microscopy images of embryonic stem cells (group 2), attest to the effectiveness and quality of the proposed method for segmentation and counting purposes. The images from group 1 processed by our method were checked by the author, and those from group 2 by specialists from the Institute of Biomedical Sciences at UFRJ. For these images we obtained average F-measures of 85.33% and 90.88%, respectively. Finally, a comparative study with the widely used watershed algorithm was carried out: the watershed achieved average F-measures of 74.02% and 78.28% for groups 1 and 2, respectively, against 85.33% and 91.60% obtained by our method.
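For reference, the F-measure reported here is the standard harmonic mean of precision P and recall R:

    F_{\beta=1} = \frac{2PR}{P + R},

the same measure (often written F_{\beta=1}) used in the text chunking evaluation further below.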
[11_MSc_lima]
Guilherme Augusto Ferreira LIMA. Eliminando redundâncias no perfil NCL EDTV. [Title in English: Eliminating redundancies from the NCL EDTV profile].
M.Sc. Diss. Port. Presentation: 10/06/11. 70 p. Advisor: Luiz Fernando Gomes Soares.
Abstract: The implementation of an NCL presentation engine, or formatter, is a complex task. This complexity is mainly due to the semantic distance between NCL documents, which are high-level declarative specifications, and the API used by the formatter to present them, in most cases low-level and imperative. The greater the distance, the greater the complexity of this mapping and, consequently, of its implementation, which becomes more likely to be inefficient and bug-prone. This work presents a new NCL profile, called NCL Raw, which eliminates most of the redundancies present in EDTV, the main profile of NCL 3.0, and, in a certain way, reduces the distance between the documents and the machine. The Raw profile captures only EDTV's essential concepts, which in turn can be used to simulate the whole language defined by EDTV itself. In other words, we can use the Raw profile as a simpler intermediate language into which EDTV documents can be converted before being presented. This dissertation discusses alternative architectures for NCL converters and presents the implementation of a document converter (from EDTV to Raw).
[11_MSc_ferreira]
Guilherme Carlos De Napoli FERREIRA. A machine learning approach for Portuguese text chunking. [Title in Portuguese: Uma abordagem de aprendizado de máquina para segmentação textual no português].
M.Sc. Diss. Port. Presentation: 13/04/11. 63 p. Advisor: Ruy Luiz Milidiú.
Abstract: Text chunking is a very relevant Natural Language Processing task, and consists in dividing a sentence into disjoint sequences of syntactically correlated words. One of the factors that highly contributes to its importance is that its results are used as a significant input to more complex linguistic problems, among them full parsing, clause identification, dependency parsing, semantic role labeling and machine translation. In particular, Machine Learning approaches to these tasks greatly benefit from the use of a chunk feature. A respectable number of effective chunk extraction strategies for the English language have been presented over the last few years. However, as far as we know, no comprehensive study has been done on text chunking for Portuguese, showing its benefits. The scope of this work is the Portuguese language, and its objective is twofold. First, we analyze the impact of different chunk definitions, using a heuristic that relies on previous full parsing annotation to generate chunks. Then, we propose Machine Learning models for chunk extraction based on the Entropy Guided Transformation Learning technique. We employ the Bosque corpus, from the Floresta Sintá(c)tica project, for our experiments. Using golden values determined by our heuristic, a chunk feature improves the F_{β=1} score of a clause identification system for Portuguese by 6.85 and the accuracy of a dependency parsing system by 1.54. Moreover, our best chunk extractor achieves an F_{β=1} of 87.95 when automatic part-of-speech tags are applied. The empirical findings indicate that, indeed, chunk information derived by our heuristic is relevant to more elaborate tasks targeted at Portuguese. Furthermore, the effectiveness of our extractors is comparable to that of state-of-the-art systems for English, taking into account that our proposed models are reasonably simple.
[11_MSc_nunes]
Gustavo Bastos NUNES. Explorando aplicações que usam a geração de vértices em GPU. [Title in English: Exploring applications that use vertex generation on the GPU].
M.Sc. Diss. Port. Presentation: 16/08/11. 111 p. Advisors: Alberto Barbosa Raposo and Bruno Feijó.
Abstract: One of the main bottlenecks in the graphics pipeline nowadays is the memory bandwidth available between the CPU and the GPU. To avoid this bottleneck, programmable features were added to video cards. With the launch of the Geometry Shader it became possible to create vertices on the GPU; however, this pipeline stage has low performance. The new graphics APIs (DirectX 11 and OpenGL 4) introduced a Tessellator stage that allows massive vertex generation inside the GPU. This dissertation studies this new pipeline stage, presents classic algorithms (PN-Triangles and Phong Tessellation) that were originally designed for the CPU, and proposes new algorithms (tube and terrain rendering on the GPU) that take advantage of this new paradigm.
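As an example of the kind of algorithm involved, Phong Tessellation (in its standard published form; notation assumed here) curves the flat barycentric interpolation p(u,v,w) of a triangle by projecting it onto the tangent planes at the vertices (p_i, n_i) and re-blending:

    \pi_i(q) = q - \big((q - p_i) \cdot n_i\big)\, n_i, \qquad
    p^{*}(u,v,w) = (1-\alpha)\, p(u,v,w) + \alpha \big( u\,\pi_1(p) + v\,\pi_2(p) + w\,\pi_3(p) \big),

with p = p(u,v,w) and a shape factor \alpha (typically 3/4). Evaluating such formulas per generated vertex is exactly the kind of work the Tessellator stage offloads to the GPU.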
[11_MSc_roenick]
Hugo ROENICK. Um modelo de componentes de software com suporte a múltiplas versões. [Title in English: A software component model with support for multiple versions].
M.Sc. Diss. Port. Presentation: 04/03/11. 66 p. Advisor: Renato Fontoura de Gusmão Cerqueira.
Abstract: Several software component models for distributed systems have been proposed by industry and academia, such as Fractal, CCM, COM, OpenCOM, LuaCCM, and SCS. One of the greatest advantages of component-based development is the better support for independent extensibility. However, managing multiple versions of components is still a challenge, especially when it is not possible to update all of a system's components at the same time. Because of that, different versions of the same component interface may be required to coexist in the same system. In this work, we try to identify the key points for supporting multiple versions of component interfaces and propose a model that offers this support. To evaluate the proposed model, we extend the SCS component system to support it. Based on the evolution history of different SCS components used in a real application, we conduct experiments with the new version of SCS to verify the effectiveness of the proposed model.
[11_MSc_monteiro]
Ingrid Teixeira MONTEIRO. Acessibilidade por diálogos de mediação: desenvolvimento e avaliação de um assistente de navegação para a Web. [Title in English: Accessibility by mediation dialogues: development and evaluation of a Web navigation helper].
M.Sc. Diss. Port. Presentation: 16/03/10. 198 p. Advisor: Clarisse Sieckenius de Souza.
Abstract: Web accessibility is one of the big challenges in Computer Science research. There are many initiatives that aim to improve systems so that users with disabilities and other special needs have full access to the information and services available on the Internet. One of them is the system presented here: WNH, a Web Navigation Helper, created to support users with special needs in carrying out activities on the Internet through mediation dialogs, previously developed with a specialized editor by users interested in helping these people. We present in this text a description of the developed tools (the editor and the helper) and the analysis of three exploratory studies carried out before and after the system's development. We also show how the experiments revealed social and cultural aspects of Brazilian society that are relevant to the design of WNH, and how they changed our initial vision of the system. It was necessary to rethink the tool in order to take the cultural variable into account in its development and evaluation.
[11_MSc_eichler]
Jerônimo Sirotheau de Almeida EICHLER. Uma arquitetura para inferência de atividades de usuário de computação móvel. [Title in English: An architecture for inference of activities of mobile computing users].
M.Sc. Diss. Port. Presentation: 12/04/11. 67 p. Advisor: Karin Koogan Breitman.
Abstract: Ubiquitous computing, combined with advances in sensor technology, creates a scenario in which the integration of several computing resources is used to keep a set of services and features available to the user whenever necessary. A particular trend in this area is activity-based systems, i.e., systems that are aware of the activity performed by the user. In these systems, an inference engine is essential to recognize the user's actions and allow the system to adapt its behavior accordingly. However, the development of this type of system is not a trivial task, as the high rate of information exchange brings challenges related to privacy, performance and information management. In this work we propose an architecture for activity inference systems. To achieve this goal, we define a set of components that perform important roles in the inference process. Finally, to show the feasibility of this approach, we designed, implemented and evaluated a system based on the proposed architecture.
[11_MSc_sousa]
José Geraldo de Sousa JUNIOR. Uma arquitetura para aplicações dinâmicas NCL baseadas em famílias de documentos. [Title in English: An architecture for dynamic NCL applications based on document families].
M.Sc. Diss. Port. Presentation: 09/11/11. 72 p. Advisor: Luiz Fernando Gomes Soares.
Abstract: The presentation of dynamic hypermedia applications may be seen as a recursive authoring process, in which applications are recreated during presentation time whenever content changes are triggered by interactions between the presentation engine and other entities such as users, imperative objects, external applications, etc. In some scenarios of dynamic hypermedia applications, it is possible to identify a hypermedia composition pattern that remains consistent even after the document is recreated. This kind of application is common, for instance, in an Interactive Digital Television environment. The presence of such a pattern throughout the presentation of an application for Interactive Digital Television allows the establishment of an analogy between recreating documents dynamically and authoring applications through a template-driven authoring method. In the latter, the authoring process is conducted by “filling” gaps left by a template that represents the hypermedia composition pattern of an application. Analogously, in dynamic document re-creation, the module that processes document updates fulfills the role of “filling” the template's gaps. The main goal of the present work is to define an architecture, inspired by this analogy, to structure NCL applications that can be dynamically refactored and that remain conformant to their respective templates. Nested Context Language (NCL) is the language of Brazilian Digital Terrestrial Television applications. In order to validate the proposal, an application that captures a real scenario and an authoring tool for graphically specifying document filling were developed.
[11_MSc_vieira]
Lourival Pereira VIEIRA NETO. Lunatik: scripting de kernel de sistema operacional com Lua. [Title in English: Operating system kernel scripting with Lua].
M.Sc. Diss. Port. Presentation: 12/04/11. 69 p. Advisor: Roberto Ierusalimschy.
Abstract: There is a design approach to improving operating system flexibility, called extensible operating systems, which holds that operating systems must allow extensions in order to meet new requirements. There is also a design approach in application development which holds that complex systems should allow users to write scripts so they can make their own configuration decisions at run-time. Following these two design approaches, we have built an infrastructure that allows users to dynamically load and run Lua scripts inside operating system kernels, improving their flexibility. In this thesis we present Lunatik, our scripting subsystem based on Lua, and show a real usage scenario: dynamically scaling CPU frequency and voltage. Lunatik is currently implemented for both NetBSD and Linux.
[11_PhD_salgado]
Luciana Cardoso de Castro SALGADO. Cultural viewpoint metaphors to explore and communicate cultural perspectives in cross-cultural HCI design. [Title in Portuguese: Metáforas de perspectivas culturais para exploração e comunicação da diversidade cultural no design de IHC].
Ph.D. Thesis. Eng. Presentation: 04/04/11. 228 p. Advisor: Clarisse Sieckenius de Souza.
Abstract: More than ever before, one of today's challenges for interaction design is the development of systems that aim to meet the needs and expectations of people with different cultural and social backgrounds. The most widely used perspective in cross-cultural design is internationalization-localization. Internationalization is the process of separating the core functionality code from the system's interface specifics (e.g. text language, measures, etc.). With localization, the interface is customized for a particular audience (through language translation, cultural markers and even technical features, for instance). The result of internationalization and localization is to conceal or neutralize cultural differences among different user communities and contexts of use. We are, however, interested in another situation, one where the design intent is virtually the opposite: to expose and explore cultural diversity. This is the case, for instance, when the purpose of the designed system is to stimulate users to make contact with a foreign culture. This thesis provides new knowledge to help HCI designers communicate their intent when they want to promote the users' contact with cultural diversity. We present five cultural viewpoint metaphors (CVM) to support reasoning and decision-making about intercultural experience dimensions. The metaphors derive from empirical studies applying Semiotic Engineering to analyze and re-design cross-cultural systems interfaces. In order to investigate if and how CVM actually support HCI professionals and practitioners at design and evaluation time, we carried out an extensive case study to assess how CVM can be used in design and evaluation activities. We found that CVM played an important role in early design stages, helping designers to reason effectively about intercultural experiences while determining which cultural perspective they want to adopt. Furthermore, CVM features provided a rich epistemic grid in which the consistency of design choices stands out more clearly.
[11_MSc_gomes]
Luciana da Silva Almendra GOMES. Proveniência para workflows de Bioinformática. [Title in English: Provenance for Bioinformatics workflows].
M.Sc. Diss. Port. Presentation: 27/04/11. 104 p. Advisor: Edward Hermann Haeusler.
Abstract: Many scientific experiments are designed as computational workflows, which can be implemented using traditional programming languages. In the Bioinformatics domain, ad-hoc scripts are often used to build workflows. Scientific Workflow Management Systems (SWMS) have emerged as an alternative to those scripts. One particular SWMS feature that has received much attention from the scientific community is the automatic capture of provenance data, which allows users to track which resources and parameters were used to obtain the results, among much other information required to validate and publish an experiment. In the present work we elicit some data provenance challenges in the SWMS context, such as (i) the heterogeneity of data representation schemes, which hinders understanding and interoperability; (ii) the storage of consumed and produced data; and (iii) the reproducibility of a specific execution. These challenges motivated the proposal of a data provenance conceptual scheme for workflow representation. We implemented an extension of a particular SWMS (Bioside) to include provenance data and store it using the proposed conceptual scheme. We focused on requirements commonly found in bioinformatics workflows.
[11_PhD_valente]
Luis Paulo Santos VALENTE. A methodology for conceptual design of pervasive mobile games. [Title in Portuguese: Um modelo de domínio para jogos pervasivos móveis].
Ph.D. Thesis. Eng. Presentation: 09/09/11. 242 p. Advisor: Bruno Feijó.
Abstract: Pervasiveness can be recognized in game playing every time the boundaries of play expand from the virtual (or fictional) world to the real world. Sensor technologies, mobile devices, networking capabilities, and the Internet make pervasive games possible. In the present work, we consider “pervasive mobile games” to be context-aware games that necessarily use mobile devices. We also consider smartphones to be the main driver to fulfill the promises of pervasive game playing. As far as we are aware, this is the first general work on pervasive mobile game design. This thesis proposes a methodology to support the conceptual design stage of pervasive mobile games. The main contributions of this research are twofold: [1] a novel list of prominent features of pervasive games, identified from game projects and the literature, with checklists for each feature; this feature list (and the corresponding checklists) can be used to spark novel game ideas and to help in discovering functional and non-functional requirements for pervasive mobile games; [2] a domain-specific language to help in specifying activities in pervasive mobile games that use mobile devices, sensors, and actuators as the main interface elements. With the proposed methodology, designers can discuss, identify, verify, and apply important features of pervasive mobile games. Also, due to the “lightweight” nature of the methodology, designers can easily grasp the “big picture” of the games by keeping focused on the intents of the game activities rather than getting lost in the source code.
[11_MSc_belchior]
Mairon de Araújo BELCHIOR. Modelo de controle de acesso no projeto de aplicações na Web semântica. [Title in English: An access control model for the design of Semantic Web applications].
M.Sc. Diss. Port. Presentation: 10/10/2011. 110 p. Advisor: Daniel Schwabe.
Abstract: The Role-Based Access Control (RBAC) model provides a way to manage access to an organization's information while reducing the complexity and cost of security administration in large networked applications. Currently, several design methods for Semantic Web (and Web in general) applications have been proposed, but none of these methods produces a specialized and integrated model for describing access control policies. The goal of this dissertation is to integrate access control into a design method for Semantic Web applications. More specifically, this work presents an extension of the SHDM method (Semantic Hypermedia Design Method) that includes the RBAC model and a rule-based policy model integrated with the other models of the method. SHDM is a model-driven approach to designing web applications for the Semantic Web. A modular software architecture was proposed and implemented in Synth, an application development environment for the SHDM method.
[11_MSc_azambuja]
Marcello de Lima AZAMBUJA. A cloud computing architecture for large scale video data processing. [Title in Portuguese: Uma arquitetura em nuvem para processamento de vídeo em larga escala].
M.Sc. Diss. Eng. Presentation: 31/08/11. 61 p. Advisor: Karin Koogan Breitman.
Abstract: The advent of the Internet poses great challenges to the design of public submission systems, as it eliminates traditional barriers such as geographical location and cost. With open global access, it is very hard to estimate the storage space and processing power required by this class of applications. In this thesis we explore cloud computing technology as an alternative solution. The main contribution of this work is a general architecture on which to build open-access, data-intensive, public submission systems. A real-world video processing scenario is analyzed using this architecture.
[11_PhD_metello]
Marcelo Gomes METELLO. Process-oriented modeling and simulation for serious games. [Title in Portuguese: Modelagem e simulação orientadas a processos para jogos sérios].
Ph.D. Thesis. Eng. Presentation: 31/08/11. 163 p. Advisor: Marco Antonio Casanova.
Abstract: This thesis focuses on serious games that simulate realistic
situations. The objectives of such games go beyond mere entertainment to fields
such as training, for example. Since other areas of Computer Science provide
methods and tools for simulating and reasoning about real situations, it is
highly desirable to use them in this kind of serious game. This thesis
introduces a new framework on which simulation techniques from different areas,
such as modeling and simulation, geographic information systems and multi-agent
systems, can be integrated into a serious game architecture. The proposed
solution resulted in the conception of a novel simulation modeling paradigm,
named process-oriented simulation (POS), which combines different aspects of the
more traditional object-oriented simulation (OOS) and agent-oriented simulation
(AOS) paradigms. The main idea of POS is the separation between state and
behavior of the entities involved in the simulation. This characteristic favours
the modularization of complex behaviors and the integration of different and
interfering simulation models into a single simulation.
Based on the POS paradigm, a discrete-event simulation formalism named
Process-DEVS was developed as an extension of the well-known DEVS simulation
formalism. Some formalisms, such as workflows and cell space processes, were
mapped to Process-DEVS and tested in the implementation of two systems: an
emergency training game and a contingency planning system, both designed for the
oil and gas industry.
[11_MSc_pessoa]
Marcos Borges
PESSOA.
Geração e execução automática de scripts de teste para aplicações
web a partir de casos de uso direcionados por comportamento. [Title
in English: Automatic generation and execution of test scripts for web
applications from use case driven by behavior].
M.Sc Diss. Port. Presentation: 30/08/11. 99 p. Advisor: Arndt von Staa.
Abstract: This work explores software requirements, described in the form of use cases, as an instrument to support the automatic generation and execution of functional tests, in order to automatically check whether the results obtained in the generated and executed tests are in accordance with what was specified. The work establishes a process and a tool for documenting use cases and automatically generating and executing test scripts that verify the behavior of web applications. The content of the use case, especially the flow of events (main and alternative), is structured in accordance with a "behavior model" that stores the test data and generates input for a browser testing tool. In this work, we used the Selenium tool to automate the interaction with the browser. The assessment of our approach involved applying the process and the generation tool to real systems, comparing the results with other techniques applied to the same systems.
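To illustrate the execution side, here is a minimal sketch of driving a browser from a tabular flow of events with Selenium (Python bindings; the page structure, element IDs and step table are hypothetical, and this is not the dissertation's tool):

```python
# Driving a browser from a use-case-like step table with Selenium.
from selenium import webdriver
from selenium.webdriver.common.by import By

steps = [  # flow of events extracted from a use case (hypothetical data)
    ("type",  (By.ID, "username"), "alice"),
    ("type",  (By.ID, "password"), "secret"),
    ("click", (By.ID, "login"),    None),
]

driver = webdriver.Firefox()
driver.get("http://example.com/login")  # placeholder URL
for action, locator, value in steps:
    element = driver.find_element(*locator)
    if action == "type":
        element.send_keys(value)
    elif action == "click":
        element.click()
assert "Welcome" in driver.page_source  # expected result from the use case
driver.quit()
```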
[11_MSc_salcedo]
Matheus SALCEDO. Gestão do conhecimento no gerenciamento de serviços de TI: uso e avaliação. [Title in English: Knowledge management for IT service management: usage and evaluation].
M.Sc. Diss. Port. Presentation: 11/01/11. 123 p. Advisor: Daniel Schwabe.
Abstract: In today's economy, the competitive advantage of a company can be directly linked to its ability to utilize the knowledge held by its members. However, to add value to an organization, this knowledge must be shared. Thus, the organization's ability to integrate and apply the expertise of its workforce is essential to achieving and maintaining an organizational competitive advantage. Knowledge management systems help to create, collect, organize and disseminate knowledge. However, these systems have limitations, such as the difficulty of integrating knowledge from different sources, usually because of the lack of semantics in their descriptions. The main objective of this dissertation is to study the technological limitations of existing knowledge management systems and to propose solutions through the adoption of Semantic Web formalisms. To achieve this goal, the knowledge management system in production at a large Brazilian company, which supports the operation of its IT infrastructure, is used as a case study. This study demonstrates that the approach can add semantics to existing data and integrate previously isolated databases, producing a better operating result.
[11_MSc_bomfim]
Mauricio Henrique de Souza BOMFIM.
Um método e um ambiente para o
desenvolvimento de aplicações na Web Semântica. [Title
in English: A method and environment for Semantic Web application development].
M.Sc. Diss. Port. Presentation: 27/01/11. 196 p. Advisor: Daniel Schwabe.
Abstract:
The growing availability of data and ontologies following the Semantic Web
standards has led to the need for methods and tools for application development
that take into account the use and availability of data distributed according
to these standards. The goal of this dissertation is to present a method,
including processes and models, and an environment for Semantic Web application
development. More specifically, this work presents the evolution of SHDM
(Semantic Hypermedia Design Method), a method for Semantic Web hypermedia
application development, and Synth, an environment for building applications
designed according to SHDM.
[11_PhD_serrano]
Maurício SERRANO.
Desenvolvimento intencional de software transparente baseado
em argumentação. [Title
in English: Intentional development of transparent software based on
argumentation].
Ph.D. Thesis. Port. Presentation: 23/08/11. 283 p. Advisor: Julio Cesar
Sampaio do Prado Leite.
Abstract: Transparency is a critical quality criterion to modern democratic
societies. As software permeates society, transparency has become a concern to
public domain software, as eGovernment, eCommerce or social software. Therefore,
software transparency is becoming a quality criterion that demands more
attention from software developers. In particular, transparency requirements of
a software system are related to non-functional requirements, e.g. availability,
usability, informativeness, understandability and auditability. However,
transparency requirements are particularly difficult to validate due to the
subjective nature of the concepts involved. This thesis proposes a
transparency-requirements-driven intentional development of transparent
software. Transparency requirements are elicited with the support of a catalog
of requirements patterns, relatively validated by the stakeholders through
argumentation, and represented in intentional models.
Intentional models are fundamental to software transparency, as they associate
goals and quality criteria expected by the stakeholders with the software
requirements. The goals and quality criteria also justify the decisions made
during
software development. A system was implemented as an intentional multi-agent
system, i.e., a system with collaborative agents that implement the
Belief-Desire-Intention model and are capable of reasoning about goals and
quality criteria.
This thesis discusses questions important to the success of our approach to the
development of transparent software, such as: (i) forward and backward
traceability; (ii) a fuzzy-logic-based reasoning engine for intentional agents;
(iii) the application of an argumentation framework to relatively validate
transparency requirements through stakeholders' multi-party agreement; and (iv)
collaborative pre-traceability for intentional models based on social
interactions. Our ideas were validated through case studies from different
domains, such as ubiquitous computing and Web applications.
[11_PhD_serrano]
Milene SERRANO.
Reuse-oriented approach for incremental and systematic
development of intentional ubiquitous applications. [Title
in Portuguese: Abordagem orientada à reutilização de software para
desenvolvimento incremental e sistemático de aplicações ubíquas intencionais].
Ph.D. Thesis. Eng. Presentation: 31/03/11. 228 p. Advisor: Carlos José
Pereira de Lucena.
Abstract: Ubiquitous applications are embedded in
intelligent environments integrated into the physical world and composed of users
with different preferences, heterogeneous devices and several content and
service providers. Moreover, they focus on offering services and contents
anywhere and at any time by assisting the users in their daily activities
without disturbing them. Based on this idealized world, the "anywhere and at any
time" paradigm poses some challenges for the Software Engineering community,
such as: device heterogeneity, distributed environments, mobility, user
satisfaction, content adaptability, context awareness, privacy, personalization,
transparency, invisibility and constant evolution of technological trends. In
order to deal with these new technological challenges, we propose a
Reuse-Oriented Approach for Incremental and Systematic Development of
Intentional Ubiquitous Applications. We have chosen two main goals that drive
our research in this thesis: (i) the construction of reuse-oriented support sets
based on an extensive investigation of ubiquitous applications and the
Intentional-Multi-Agent Systems paradigm - i.e. Development for Reuse; and (ii)
the incremental and systematic development of
Intentional-Multi-Agent-Systems-driven ubiquitous applications based on the
reuse-oriented approach - i.e. Development with Reuse. Some contributions of our
work are: (i) a reuse-oriented architecture centered on support sets - i.e.
building blocks mainly composed of conceptual models, frameworks, patterns and
libraries - obtained from the Domain Engineering of Ubiquitous Applications;
(ii) a reuse-oriented Ubiquitous Application Engineering for incremental and
systematic development of intentional ubiquitous applications centered on the
proposed building blocks; (iii) a reasoning engine focused on fuzzy conditional
rules and the Belief-Desire-Intention model to improve the agents' cognitive
capacity; (iv) a specific mechanism based on intentional agents to deal with
privacy issues by balancing privacy and personalization as well as transparency
and invisibility; (v) a catalogue that graphically presents the main ubiquitous
non-functional requirements, their interdependencies and ways to operationalize
them based on the combination of traditional and emergent technologies; (vi)
ontologies to allow the dynamic construction of interfaces and to improve the
communication and interoperability of software agents; and (vii) a dynamic
database model to store and retrieve the ubiquitous profiles (e.g. user, device,
network and contract profiles) by improving the data management "on the fly".
The proposed approach was evaluated by developing different ubiquitous
applications (e.g. e-commerce
and dental clinic ubiquitous applications).
[11_MSc_rosa]
Otávio Araújo Leitão ROSA.
Test-driven maintenance: uma abordagem para a
manutenção de sistemas legados. [Title
in English: Test-driven Maintenance: an approach for the maintenance of legacy
systems].
M.Sc. Diss. Port. Presentation: 08/04/11. 98 p. Advisor: Arndt von Staa.
Abstract:
Test-Driven Development is a software development technique based on quick
cycles that switch between writing tests and implementing a solution that makes
the tests pass. Test-Driven Development has produced excellent results in
various aspects of building new software systems. Increased maintainability,
improved design, reduced defect density, better documentation and increased
code test coverage are reported as advantages that contribute to reducing the
cost of development and, consequently, to increasing the return on investment.
All these benefits have contributed to making Test-Driven Development an
increasingly relevant practice in software development. When evaluating
Test-Driven Development from the perspective of maintaining legacy systems,
however, we face a clear mismatch when trying to adopt this technique:
Test-Driven Development is based on the premise that tests should be written
before coding, but when working with legacy code we already have thousands of
lines written and running. Considering this context, we discuss in this
dissertation a technique, which we call Test-Driven Maintenance, that results
from adapting Test-Driven Development to the needs of maintaining legacy
systems. We describe how we performed the adaptation that led us to this new
technique. Afterwards, we evaluate the efficacy of the technique by applying it
to a realistic project. To obtain realistic evaluation results, we performed an
empirical study while introducing the technique in a maintenance team working
on a legacy system that is in constant evolution and use by an enterprise in
Rio de Janeiro.
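In spirit, the technique inverts TDD's usual order for legacy code: existing
behavior is first pinned down by tests, and only then changed. A minimal
illustration using Python's unittest follows; the legacy routine is
hypothetical and stands in for real legacy code:

    # Illustrative only: a characterization-style test written *before*
    # touching a hypothetical legacy routine, pinning its current behavior.
    import unittest

    def legacy_discount(total):          # stand-in for existing legacy code
        return total * 0.9 if total > 100 else total

    class TestLegacyDiscount(unittest.TestCase):
        def test_keeps_small_orders_unchanged(self):
            self.assertEqual(legacy_discount(100), 100)

        def test_discounts_large_orders(self):
            self.assertAlmostEqual(legacy_discount(200), 180.0)

    if __name__ == "__main__":
        unittest.main()   # run the safety net, then modify with confidence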
[11_MSc_vilela]
Paula de Castro Sonnenfeld VILELA.
Classificação de sentimento para notícias sobre a Petrobras no mercado
financeiro. [Title
in English: Sentiment analysis for financial news about Petrobras company].
M.Sc. Diss. Port. Presentation: 01/07/11. 50 p. Advisor: Ruy Luiz Milidiú.
Abstract: A huge amount of information is available online, in particular
regarding financial news. Current research indicates that stock news has a
strong correlation to market variables such as trade volumes, volatility, stock
prices and firm earnings. Here, we investigate a Sentiment Analysis problem for
financial news. Our goal is to classify financial news as favorable or
unfavorable to Petrobras, an oil and gas company with stocks traded on the
Stock Exchange market. We explore Natural Language Processing techniques to
improve the sentiment classification accuracy of a classical bag-of-words
approach. We filter on-topic phrases in each Petrobras-related news item and
build syntactic and stylistic input features. For sentiment classification, the
Support Vector Machines algorithm is used. Moreover, we apply four feature
selection methods and build a committee of SVM models. Additionally, we
introduce Petronews, a Portuguese financial news corpus about Petrobras,
annotated for sentiment. It is composed of a collection of 1,050 online
financial news items from 06/02/2006 to 01/29/2010. Our experiments indicate
that our method is 5.29% better than a standard bag-of-words approach, reaching
an 87.14% accuracy rate for this domain.
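For context, the bag-of-words baseline that the dissertation improves upon can
be set up in a few lines. The sketch below uses scikit-learn for illustration;
the example sentences and labels are invented, not drawn from Petronews:

    # Sketch of a bag-of-words + linear SVM sentiment baseline; the tiny
    # corpus here is made up for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    news = ["producao da empresa cresce acima do esperado",
            "empresa anuncia queda nos lucros do trimestre"]
    labels = ["favorable", "unfavorable"]

    model = make_pipeline(CountVectorizer(), LinearSVC())
    model.fit(news, labels)                     # learn term weights
    print(model.predict(["lucros crescem no trimestre"]))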
[11_MSc_silveira]
Paulo da Silva SILVEIRA.
Projeto e implementação de interfaces coletivas em um middleware orientado a componentes de software. [Title
in English: Design and implementation of collective interfaces in a
component-oriented middleware].
M.Sc. Diss. Port. Presentation: 08/04/11. 93 p. Advisor: Renato Fontoura de
Gusmão Cerqueira.
Abstract:
Traditionally, the development process of parallel systems emphasizes
performance at the expense of better programming abstractions, which causes
problems such as excessive code complexity and reduced software
maintainability. New techniques, such as software component technologies, have
shown significant results in building parallel software. This work conducted a
study of the mechanism of parallel communication between components known as
Collective Interfaces. As part of this study, we implemented this mechanism in
the SCS middleware, where two connectors were designed and implemented for
parallel synchronization and communication. This implementation allowed us to
analyze the requirements for integrating Collective Interfaces in a
component-oriented middleware and to identify the challenges of implementing
this mechanism in a language such as C++, widely used in scientific
applications.
[11_MSc_silva]
Paulo de Tarso Gomide Castro SILVA.
A system for stock market forecasting and
simulation. [Title
in Portuguese: Um sistema para predição e simulação do mercado de capitais].
M.Sc. Diss. Port. Presentation: 25/03/11. 63 p. Advisor: Ruy Luiz Milidiú.
Abstract:
The interest of both investors and researchers in stock market behavior
forecasting has increased throughout recent years. Despite the wide number of
publications examining this problem, accurately predicting future stock trends
and developing business strategies capable of turning good predictions into
profits are still great challenges. This is partly due to the nonlinearity and
noise inherent to the stock market data source, and partly because benchmarking
systems to assess forecasting quality are not publicly available. Here, we
perform time series forecasting aiming to guide the investor in both Pairs
Trading and buy and sell operations. Furthermore, we explore two different
forecasting periodicities. The first is an interday forecast, which considers
only daily data and whose goal is to predict values for the current day. The
second is an intraday approach, which aims to predict values for each trading
hour of the current day and also takes advantage of the intraday data already
known at prediction time. In both forecasting schemes, we use three regression
tools as predictor algorithms: Partial Least Squares Regression, Support Vector
Regression and Artificial Neural Networks. We also propose a trading system as
a better way to assess forecasting quality. In the experiments, we examine
assets of the most traded companies in the BM&FBOVESPA Stock Exchange, the
world's third largest and the official Brazilian stock exchange. The results
for the three predictors are presented and compared to four benchmarks, as well
as to the optimal solution. The difference in forecasting quality, whether
measured by forecasting error metrics or by trading system metrics, is
remarkable. If we consider just the mean absolute percentage error, the
proposed predictors do not show a significant superiority. Nevertheless, under
the trading system evaluation, they show outstanding results: the yield in some
cases amounts to an annual return on investment of more than 300%.
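As an illustration of the interday setup with one of the three predictors, the
sketch below trains Support Vector Regression on lagged daily closes to predict
the next value; the prices, window size and parameters are invented:

    # Sketch of interday forecasting with SVR over lagged closes; the data
    # and hyperparameters are illustrative only.
    import numpy as np
    from sklearn.svm import SVR

    closes = np.array([10.0, 10.2, 10.1, 10.4, 10.6, 10.5, 10.8, 11.0])
    lag = 3
    X = np.array([closes[i:i + lag] for i in range(len(closes) - lag)])
    y = closes[lag:]                    # next-day close for each lag window

    model = SVR(kernel="rbf", C=10.0).fit(X, y)
    print(model.predict(closes[-lag:].reshape(1, -1)))  # forecast for "today"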
[11_MSc_rodrigues]
Paulo Gallotti RODRIGUES.
v-Glove: uma proposta de dispositivo de interação para
aplicações imersivas de realidade virtual. [Title
in English: v-Glove: proposing an interaction device for immersive virtual
reality applications].
M.Sc. Diss. Port. Presentation: 01/04/11. 102 p. Advisor: Alberto Barbosa
Raposo.
Abstract: Traditional interaction devices such as mouse and keyboard do not
adapt well to immersive applications, since their use in this kind of
environment is not ergonomic: the user may be standing or in movement.
Moreover, in the current interaction model for this kind of application (based
on wands and 3D mice), users have to change context every time they need to
execute a non-immersive task, especially symbolic input. These constant context
changes from immersion to WIMP (Windows, Icons, Menus and Pointers) introduce a
rupture in the user's interaction with the application. The objective
of this work is to explore the use possibilities of a device that maps a touch
interface in a virtual reality immersive environment. We developed a glove for
interaction in 3D virtual reality immersive environments (v-Glove), which has
two main
functionalities: tracking of the position of the user's forefinger in the space
and the generation of a vibration in the fingertip when it reaches an area
mapped in the interaction space. We performed quantitative and qualitative tests
with users to evaluate v-Glove, comparing it with a 3D mouse used in immersive
environments.
[11_MSc_souza]
Paulo Roberto França de SOUZA.
Uma ferramenta para reconstrução da sequência de
interações entre componentes de um sistema distribuído. [Title
in English: A tool for rebuilding the sequence of interactions between
components of a distributed system].
M.Sc. Diss. Port. Presentation: 04/04/11. 66 p. Advisor: Renato Fontoura de
Gusmão Cerqueira.
Abstract: Distributed systems often present runtime behavior different from
what is expected by the programmer. Static analysis is not enough to understand
the runtime behavior and to diagnose errors. This difficulty is caused by the
non-deterministic nature of distributed systems, a result of their inherent
characteristics, such as concurrency, communication latency and partial
failure. Therefore, a better view of the interactions between the system's
software components is necessary in order to understand its runtime behavior.
In this work we present a tool that rebuilds the interactions among distributed
components, presents a view of distributed threads and remote call sequences,
and allows the analysis of causality relationships. Our tool also stores the
interactions over time and correlates them to the system architecture and to
performance data. The proposed tool helps the developer to better understand
scenarios involving unexpected system behavior and to restrict the scope of
error analysis, making the search for a solution easier.
[11_PhD_mottajunior]
Paulo Rogério da MOTTA JUNIOR.
Uma abstração para programação paralela: suporte
para o desenvolvimento de aplicações. [Title
in English: An abstraction for parallel programming: support for developing
applications].
Ph.D. Thesis. Port. Presentation: 11/08/11. 107 p. Advisor: Noemi de La Rocque
Rodriguez.
Abstract: The evolution of the field of programming traditionally trades
performance for more powerful abstractions that simplify the programmer's work,
and the effects of this evolution can be observed in the parallel programming
area. Parallel programming typically focuses on high performance, based on the
procedural paradigm, to achieve the highest possible throughput, but
determining the point at which one should trade performance for more powerful
abstractions remains an open problem. With the advent of new system-level tools
and libraries that deliver greater performance without the programmer's
intervention, the belief that the application programmer should optimize
communication code starts to be challenged. As the growing demand for
large-scale parallel solutions becomes noticeable, factors like code
complexity, design and modeling power, maintainability, faster development,
greater reliability and reuse are expected to play a part in the decision of
which approach to use. In the present work, we investigate the use of
higher-level abstraction layers that could provide many benefits to the
parallel application developer. We argue that the use of interpreted languages
may aid in abstracting the processor architecture, providing an opportunity to
optimize the virtual machines without affecting the user's application code.
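A small example of the kind of abstraction being argued for: a data-parallel
map in an interpreted language, where worker creation and communication are
handled by the runtime rather than hand-coded by the application programmer
(the workload function below is a stand-in):

    # Sketch of a high-level parallel map: no explicit message passing or
    # process management in the application code.
    from multiprocessing import Pool

    def simulate(task_id):
        # stand-in for a CPU-bound kernel; real work is application-specific
        return sum(i * i for i in range(task_id * 100_000))

    if __name__ == "__main__":
        with Pool() as pool:                        # runtime picks workers
            results = pool.map(simulate, range(8))  # communication is hidden
        print(results)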
[11_MSc_asti]
Pedro Larronda ASTI.
Anotador morfossintático para o Português-Twitter. [Title
in English: Morphosyntactic tagger for Portuguese-Twitter].
M.Sc. Diss. Port. Presentation: 05/04/11. 49 p. Advisor:
Ruy Luiz Milidiú.
Abstract: In this work we present a language processor that solves the task of
MORPHO-SYNTACTIC TAGGING of messages posted in Portuguese on Twitter. By
analyzing the messages written by Brazilians on Twitter, it is easy to notice
that new characters are introduced into the alphabet and new words are added to
the language. Furthermore, we note that these messages are syntactically
malformed. This precludes the use of existing Portuguese processors on these
messages; nevertheless, this problem can be solved by considering them as
written in a new language, Portuguese-Twitter. Both the alphabet and the
vocabulary of this idiom contain features of Portuguese; the grammar, however,
is different. In order to build the processors for this new language, we have
used a supervised learning technique known as ENTROPY GUIDED TRANSFORMATION
LEARNING (ETL). Additionally, to train ETL processors, we have built an
annotated corpus of messages in Portuguese-Twitter. We are not aware of any
other taggers for the Portuguese-Twitter morphosyntactic task, so we have
compared our tagger to the accuracy of state-of-the-art morphosyntactic
annotation for regular Portuguese, which is around 96% depending on the tag set
chosen. To assess the quality of the processor, we have used accuracy, which
measures how many tokens were tagged correctly. Our experimental results show
an accuracy of 90.24% for the proposed MORPHO-SYNTACTIC TAGGER. This
corresponds to significant learning, since the initial baseline system has an
accuracy of only 76.58%. This finding is consistent with the observed learning
for the corresponding regular Portuguese taggers.
[11_MSc_moura]
Pedro Nuno de Souza MOURA.
Integrando metaeurísticas com resolvedores MIP para o
capacitated vehicle routing problem. [Title
in English: Integrating metaheuristics with MIP solvers to the capacitated
vehicle routing problem].
M.Sc. Diss. Port. Presentation: 26/08/11. 68 p. Advisor: Marcus Vinicius
Soledade Poggi de Aragão.
Abstract: Since its inception, approaches to Combinatorial Optimization have
been polarized between exact and heuristic methods. Recently, however,
strategies that combine both methods have been proposed for various problems,
showing promising results. In this context, the concepts of ball and
ellipsoidal neighborhoods appear, which perform a search around one or more
reference solutions. This work studies the application of such neighborhoods to
the Capacitated Vehicle Routing Problem (CVRP), using the Robust
Branch-and-Cut-and-Price algorithm. Experiments were conducted and their
results analyzed.
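A ball neighborhood around a reference solution is commonly expressed as a
single linear (local-branching-style) constraint added to the MIP, limiting the
Hamming distance to that solution. The sketch below illustrates the idea with
the PuLP modeler on toy data; it is not the thesis's
Robust Branch-and-Cut-and-Price code:

    # Sketch of a "ball" neighborhood as a MIP constraint: only solutions
    # within Hamming radius r of a reference x_ref are feasible. Data is toy.
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum

    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
    cost = {e: 1.0 for e in edges}
    x_ref = {(0, 1): 1, (0, 2): 0, (1, 2): 1, (1, 3): 0, (2, 3): 1}
    r = 2                                       # neighborhood radius

    prob = LpProblem("ball_neighborhood", LpMinimize)
    x = {e: LpVariable(f"x_{e[0]}_{e[1]}", cat="Binary") for e in edges}
    prob += lpSum(cost[e] * x[e] for e in edges)    # (simplified) objective
    # Hamming distance to the reference solution must not exceed r:
    prob += (lpSum(1 - x[e] for e in edges if x_ref[e] == 1)
             + lpSum(x[e] for e in edges if x_ref[e] == 0)) <= r
    prob.solve()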
[11_MSc_riverasalas]
Percy Enrique RIVERA SALAS.
StdTrip: an
a priori design process for publishing linked data. [Title
in Portuguese: StdTrip: um processo de projeto a priori para a publicação de "Linked
Data"].
M.Sc. Diss. Port. Presentation: 01/04/11. 73 p. Advisor: Karin Koogan
Breitman.
Abstract: Open Data is a new approach to promote interoperability of data on
the Web. It consists in the publication of information produced, archived and
distributed by organizations in formats that allow it to be shared, discovered,
accessed and easily manipulated by third-party consumers. This approach
requires the triplification of datasets, i.e., the conversion of database
schemata and their instances to a set of RDF triples. A key issue in this
process is deciding how to represent database schema concepts in terms of RDF
classes and properties. This is done by mapping database concepts to an RDF
vocabulary, used as the basis for generating the triples. The construction of
this vocabulary is extremely important, because the more standards are reused,
the easier it will be to interlink the result to other existing datasets.
However, tools available today do not support the reuse of standard
vocabularies in the triplification process, but rather create new vocabularies.
In this thesis, we present the StdTrip process, which guides users through the
triplification process while promoting the reuse of standard RDF vocabularies.
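To make the triplification step concrete, the sketch below maps a row of a
hypothetical person table to RDF triples, reusing the standard FOAF vocabulary
instead of inventing a new one; rdflib is used here for illustration and is not
necessarily what StdTrip employs:

    # Triplification with vocabulary reuse: one database row -> FOAF triples.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    graph = Graph()
    graph.bind("foaf", FOAF)

    row = {"id": 42, "name": "Ana Silva", "email": "ana@example.org"}
    subject = URIRef(f"http://example.org/person/{row['id']}")

    graph.add((subject, RDF.type, FOAF.Person))            # table -> class
    graph.add((subject, FOAF.name, Literal(row["name"])))  # column -> property
    graph.add((subject, FOAF.mbox, URIRef(f"mailto:{row['email']}")))

    print(graph.serialize(format="turtle"))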
[11_MSc_pereira]
Rafael Silva PEREIRA.
A split&merge architecture for distributed video
processing in the Cloud. [Title
in Portuguese: Uma arquitetura de split&merge para processamento distribuído de
vídeo baseado em Cloud]. M.Sc. Diss. Eng. Presentation: 18/04/11. 76 p. Advisor:
Karin Koogan Breitman.
Abstract: The Map Reduce approach, proposed by Dean & Ghemawat, is an efficient
way to process very large datasets using a computer cluster and, more recently,
cloud infrastructures. Traditional Map Reduce implementations, however, provide
neither the necessary flexibility (to choose among different encoding
techniques in the mapping stage) nor the control (to specify how to organize
results in the reducing stage) required to process video files. The Split &
Merge tool, proposed in this thesis, generalizes the Map Reduce paradigm and
provides an efficient solution that contemplates relevant aspects of
processing-intensive video applications.
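The split/process/merge pattern can be shown in miniature: the input is split
into chunks, chunks are processed in parallel, and results are merged back in
order. In the sketch below the "encoding" step is a placeholder for a real
encoder invocation:

    # Miniature split -> process -> merge pipeline; encode_chunk stands in
    # for the real (expensive) video encoding work.
    from multiprocessing import Pool

    def split(video, n):
        size = len(video) // n
        return [video[i * size:(i + 1) * size] for i in range(n)]

    def encode_chunk(chunk):
        return chunk.lower()          # placeholder for real encoding

    def merge(chunks):
        return "".join(chunks)        # order is preserved by Pool.map

    if __name__ == "__main__":
        raw = "FRAMES" * 8            # stand-in for the raw video stream
        with Pool() as pool:
            encoded = pool.map(encode_chunk, split(raw, 4))
        print(merge(encoded))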
[11_MSc_marroquinmogrovejo]
Renato Javier MARROQUÍN MOGROVEJO.
Experimental statistical analysis of MapReduce joins. [Title
in Portuguese: Análise estatística experimental de junções MapReduce].
M.Sc. Diss. Port. Presentation: 28/06/11. 89 p. Advisor: Edward Hermann
Haeusler.
Abstract: There are different scalable data management solutions that can take
advantage of cloud features, making them more attractive for deployment in such
environments. One of the most critical operations in data processing is joining
datasets, but this operation is often the most time-consuming and one of the
hardest to optimize. In this work, we explore statistical methods in order to
predict the execution times of join queries. In addition, join selectivity
estimation is explored in a MapReduce environment in order to use it as another
parameter in our model.
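For context, a reduce-side (repartition) join is the canonical MapReduce join
whose cost such models must capture. The sketch below emulates its map, shuffle
and reduce phases in plain Python on toy relations:

    # Reduce-side join in miniature; in a real MapReduce job the shuffle
    # phase is performed by the framework, not by application code.
    from collections import defaultdict

    users = [(1, "ana"), (2, "bruno")]               # relation R: (key, name)
    orders = [(1, "book"), (1, "pen"), (2, "mug")]   # relation S: (key, item)

    # map phase: tag each record with its source relation, keyed by join key
    mapped = ([(k, ("R", v)) for k, v in users]
              + [(k, ("S", v)) for k, v in orders])

    # shuffle phase: group records by key
    groups = defaultdict(list)
    for key, tagged in mapped:
        groups[key].append(tagged)

    # reduce phase: combine R-records with S-records sharing the same key
    for key, records in groups.items():
        r_side = [v for tag, v in records if tag == "R"]
        s_side = [v for tag, v in records if tag == "S"]
        for r in r_side:
            for s in s_side:
                print(key, r, s)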
[11_PhD_espinha]
Rodrigo de Souza Lima ESPINHA.
Suporte topológico em paralelo para malhas de
elementos finitos em análises dinâmicas de fratura e fragmentação. [Title
in English: Parallel topological support for finite element meshes in dynamic
fracture and fragmentation analysis].
Ph.D. Thesis. Port. Presentation: 05/04/11. 122 p. Advisor: Waldemar Celes
Filho.
Abstract: Fracture propagation and fragmentation phenomena in solids can be
described by Cohesive Zone Models and simulated with the Finite Element Method.
Among the computational approaches of recent interest for fracture
representation in finite element meshes are those based on cohesive elements.
In those approaches, fracture behavior is represented by cohesive elements
inserted at the interfaces between volumetric (bulk) elements of the original
mesh. Cohesive element models can be classified as intrinsic or extrinsic.
Intrinsic models require pre-inserted cohesive elements at every volumetric
interface at which fracture is allowed to happen. Extrinsic models, on the
other hand, require that cohesive elements be adaptively inserted, whenever and
wherever necessary. However, the traditional mesh representation (elements and
nodes) is not sufficient for handling adaptive meshes, which makes an
appropriate topological support necessary. In general, cohesive models of
fracture also require a high level of mesh refinement near crack tips so that
accurate results can be achieved. This implies memory and processor consumption
that may be prohibitive for traditional workstations; thus, parallel
environments become important for the solution of fracture problems. However,
due to the difficulties of parallelizing extrinsic models, existing approaches
use intrinsic models or implement extrinsic simulations based on pre-inserted
cohesive elements or on cohesive elements represented as attributes of
volumetric elements. In order to allow fracture and fragmentation simulations
of large models in a simple and efficient way, this thesis proposes the ParTopS
system, a parallel topological support for finite element meshes in dynamic
fracture and fragmentation analyses. Specifically, a compact and efficient
representation of distributed fracture meshes is presented. Cohesive elements
are explicitly represented and treated as regular elements in the mesh. Based
on the distributed mesh representation, we propose a scalable parallel
algorithm for the adaptive insertion of cohesive elements in both
two-dimensional and three-dimensional meshes. Symmetrical topological
operations are exploited in order to reduce communication among mesh
partitions. The ParTopS system has been employed to parallelize existing serial
extrinsic simulations. The scalability and correctness of the parallel
topological support are demonstrated through computational experiments executed
on a massively parallel environment. The achieved results show that ParTopS can
be effectively applied to enable simulations of large models.
[11_PhD_correa]
Sand Luz CORREA.
Detecção estatística de anomalias de desempenho em sistemas
baseados em middleware. [Title
in English: Statistical detection of performance anomalies in middleware-based
systems].
Ph.D. Thesis. Port. Presentation: 01/04/11. 146 p. Advisor: Renato Fontoura de
Gusmão Cerqueira.
Abstract: Middleware technologies have been widely adopted by the software
industry to reduce the cost of developing computer systems. Nonetheless,
predicting the performance of middleware-based applications is difficult due to
the specific implementation details of a middleware platform and the multitude
of settings and services provided by middleware for different deployment
scenarios. Thus, the performance management of middleware-based applications
can be a non-trivial task. Autonomic computing is a new paradigm for building
self-managed systems, i.e., systems that seek to operate with minimal human
intervention. This work investigates the use of statistical approaches to
building autonomic management solutions to control the performance of
middleware-based applications. In particular, we investigate this issue from
three perspectives. The first is related to the prediction of performance
problems. We propose the use of classification techniques to derive performance
models that assist the autonomic management of distributed applications. In
this sense, different classes of statistical learning models are assessed in
both offline and online learning scenarios. The second perspective refers to
the reduction of false alarms, seeking the development of reliable mechanisms
that are resilient to transient failures of the classifiers. This work proposes
an algorithm to augment the predictive power of statistical learning techniques
by combining them with statistical tests for trend detection. Finally, the
third perspective is related to diagnosing the root causes of a performance
problem. For this context, we also propose the use of statistical tests. The
results presented in this thesis show that statistical approaches can
contribute to the development of tools that are both effective and efficient in
characterizing the performance of middleware-based applications, and can thus
contribute decisively to the different perspectives of the problem.
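The false-alarm-reduction idea can be sketched as follows: an alarm is raised
only when the classifier's prediction is corroborated by a statistical trend
test over a recent window of a performance metric. The sketch below uses
Kendall's tau as a Mann-Kendall-style trend check; the thresholds and data are
illustrative, not the thesis's algorithm:

    # Combine a classifier's verdict with a trend test on a metric window.
    from scipy.stats import kendalltau

    def confirmed_alarm(classifier_says_anomaly, response_times, alpha=0.05):
        # Mann-Kendall-style check: correlate the metric with time
        tau, p_value = kendalltau(range(len(response_times)), response_times)
        upward_trend = tau > 0 and p_value < alpha
        return classifier_says_anomaly and upward_trend

    window = [101, 105, 103, 110, 118, 121, 130, 129, 140, 151]  # ms, made up
    print(confirmed_alarm(True, window))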
[11_MSc_cerqueira]
Sérgio Luiz Ruivace CERQUEIRA.
Comparação de projeto baseado em agentes e
orientação a objetos na plataforma GeoRisc. [Title
in English: Comparison of agent and object oriented projects using the GeoRisc
platform].
M.Sc. Diss. Esp. Presentation: 05/04/11. 108 p. Advisor: Arndt von Staa.
Abstract:
There are several software development technologies currently described in the
literature. Two such technologies are object orientation, which is
consolidated, and agent orientation, which has been the subject of many studies
and experiments. These studies indicate that agent orientation is very
promising and an evolution of object orientation. However, there are only a few
studies comparing these two techniques, and they have been based on ideological
and qualitative comparisons. This dissertation aims to develop and evaluate
methods for the systematic evaluation of two architectures for implementing
systems. The two technologies were compared to determine whether the use of one
brought benefits or disadvantages, or was indifferent, with respect to the
other. The comparison was based on a real problem; in other words, two
implementations were created that address the problem in a similar way, each
using one of the technologies. To develop this work, a measurement plan was
created based on the Goal Question Metric (GQM) technique. The measurement plan
was applied to both implementations and the results were evaluated, identifying
the benefits of each technique. Finally, the use of the GQM model in a real
project is discussed.
[11_MSc_buback]
Silvano Nogueira BUBACK.
Utilizando aprendizado de máquina para construção de uma ferramenta de apoio a
moderação de comentários. [Title
in English: Using machine learning to build a tool that helps comments
moderation].
M.Sc. Diss. Esp. Presentation: 01/09/11. 65 p. Advisors: Marco Antonio
Casanova and Ruy Luiz Milidiú.
Abstract: One of the main changes brought by Web 2.0 is the increase of user
participation in content generation, mainly in social networks and in comments
on news and service sites. These comments are valuable to the sites because
they bring feedback and motivate other people to participate and to spread the
content. On the other hand, these comments also bring some kinds of abuse, such
as bad words and spam. While for some sites their own community moderation is
enough, for others this inappropriate content may compromise the site. In order
to help these sites, a tool that uses machine learning techniques was built to
moderate comments. As a test to compare results, two datasets captured from
Globo.com were used: the first with 657,405 comments posted through its site
and the second with 451,209 messages captured from Twitter. Our experiments
show that the best results are achieved when comment learning is done according
to the subject being commented on.
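The main finding, that learning per subject beats a single global model, can be
sketched as training one classifier per subject; the data, labels and the
choice of LogisticRegression below are invented for illustration:

    # One moderation model per subject instead of a single global model.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    comments = {
        "sports":   (["great match", "spam buy now", "nice goal", "idiot ref"],
                     ["ok", "reject", "ok", "reject"]),
        "politics": (["good analysis", "spam click here", "i agree", "you fool"],
                     ["ok", "reject", "ok", "reject"]),
    }

    models = {}
    for subject, (texts, labels) in comments.items():
        models[subject] = make_pipeline(CountVectorizer(), LogisticRegression())
        models[subject].fit(texts, labels)          # train per subject

    print(models["sports"].predict(["buy now cheap"]))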
[11_PhD_castro]
Thaís Helena Chaves de CASTRO.
Sistematização da aprendizagem de programação em
grupo. [Title
in English: Systematic approach for group programming learning].
Ph.D. Thesis. Port. Presentation: 24/04/11. 152 p. Advisor: Hugo Fuks.
Abstract:
The research reported here deals with devising structuring elements that may
broaden the teacher's intervention opportunities in a context of group
programming learning. Based on a set of case studies with freshmen in computing
courses, a systematization of practices, methods and technologies was
developed, producing an approach for supporting group programming based on
three investigation paths: pedagogical assumptions, CSCL environments and
collaboration methods. The main learning rationale is Jean Piaget's Cognitive
Development Theory, used alongside group programming techniques commonly
applied in undergraduate introductory programming courses. Computational tools
are used to monitor and intervene during the learning process, and in this
context CSCL environments encourage collaboration and regulate expected
practices. In this thesis, other technologies, like languages for agent
representation and pattern identification, are also exploited to improve
control and facilitate interventions. Finally, as a collaboration method, a
Programming Progressive Learning Scheme is proposed that helps students adopt
collaborative practices when solving exercises and that can be formalized for
use with automated platforms.
[11_MSc_segura]
Vinicius Costa Villas Bôas SEGURA.
UISKEI: sketching the user interface and its
behavior. [Title
in Portuguese: UISKEI: desenhando a interface e seu comportamento].
M.Sc. Diss. Port. Presentation: 30/03/11. 95 p. Advisor: Simone Diniz
Junqueira Barbosa.
Abstract: During the early user interface design phase, different solutions
should be explored and iteratively refined by the design team. In this rapidly
evolving scenario, a tool that enables and facilitates changes is of great
value. UISKEI takes the power of sketching, allowing the designer to convey his
or her idea in a rough and more natural form of expression, and adds the power
of computing, which makes manipulation and editing easier. More than an
interface prototype drawing tool, UISKEI also features the definition of the
prototype behavior, going beyond navigation between user interface containers
(e.g. windows, web pages, screen shots) and allowing the designer to define
changes to the state of user interface elements and widgets (enabling/disabling
widgets, for example). This dissertation presents the main concepts underlying
UISKEI and a study of how it compares to similar tools. The user interface
drawing stage is detailed, explaining how the conversion of sketches to widgets
is made by combining a sketch recognizer, which uses the Levenshtein distance
as a similarity measure, with the interpretation of recognized sketches based
on an evolution tree. Furthermore, it discusses the different solutions
explored to address the issue of drawing an interaction, suggesting an
innovative mind-map-like visualization approach that enables the user to
express the event, conditions and actions of each interaction case while still
keeping the pen-based interaction paradigm in mind.
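The similarity measure named above is standard and easy to state in code; the
generic dynamic-programming version below is illustrative (UISKEI applies it to
string encodings of sketch strokes, a detail not reproduced here):

    # Levenshtein distance: minimum number of single-character edits
    # (insertions, deletions, substitutions) turning one string into another.
    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,               # deletion
                                curr[j - 1] + 1,           # insertion
                                prev[j - 1] + (ca != cb))) # substitution
            prev = curr
        return prev[-1]

    print(levenshtein("rectangle", "rectangel"))  # small distance -> similar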