Theses and Dissertations
2009
ABSTRACTS
Departamento de Informática
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
This file contains the list of the M.Sc. dissertations and Ph.D. theses presented to the Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio, Brazil, in 2009. They are all available in print format and, according to the authors' preference, some of them are freely available for download, while others are available for download to the PUC-Rio community exclusively (*).
For any requests, questions, or suggestions, please contact:
Rosane Castilho
bib-di@inf.puc-rio.br
Last update: 12/APRIL/2010
[09_MSc_barbosa]
Amadeu Andrade BARBOSA JUNIOR. Implantação de componentes de software
distribuídos multi-linguagem e multi-plataforma. [Title
in English: Deployment of distributed, multi-language and multi-platform
component-based software].
M.Sc. Diss. Port. Presentation: 27/08/09. 113 p. Advisor: Renato Fontoura de
Gusmão Cerqueira.
Abstract: This work presents an infrastructure for remote and decentralized
deployment of distributed, multi-language and multi-platform component-based
applications. Unlike other component deployment systems, this infrastructure
supports the local installation of static dependencies, which are managed by a
packaging system, similar to those used in Linux-based operating systems. In
order to simplify the management of the execution environment and the
component's life-cycle, this infrastructure provides an API that allows
deployment planning and physical mapping at incremental levels of detail.
The incremental levels of detail promote the modularization of deployment plans
and allow the automatic, semi-automatic, or fully manual mapping of components
onto the execution environment and the physical resources, giving the developer
finer control over resource distribution when needed. The design of this
infrastructure aims to provide a basis for future work, such as the development
of new strategies for automatic deployment and the integration with
architecture description languages.
[09_PhD_silva]
Anderson Oliveira da SILVA.
Global mobility architecture. [Title
in Portuguese: Arquitetura de mobilidade global].
Ph.D. Thesis. Eng. Presentation: 2/07/09. 128 p. Advisors: Luiz Fernando Gomes
Soares and Sérgio Colcher.
Abstract: Handover optimization is one of the areas of growing scientific
interest concerning mobility in communication networks. The duration of service
disruption and the amount of packet loss during handover periods directly impact
on communication performance and are critical to applications in which QoS is an
essential factor. The traditional Mobile IP protocol (MIPv6) makes IP mobility
possible but does not meet the requirements of real-time, interactive, or
delay-sensitive applications. To optimize the L3 handover process, some
enhancements were standardized, like HMIP and FMIP. These protocols contribute
to improve the L3 handover but their strategies are focused on the downstream
flow. To address the latency of restoring the upstream flow, two
strategies can be followed: (i) proposing changes or adaptations to these
protocols; or (ii) proposing a new mobility architecture with a new mobility
control protocol. In this work, we propose the Global Mobility Architecture (GMA),
introducing its functional entities and mobility support operations by means of
the GMA Mobility Protocol (GMP). When the GMA is compared to traditional and
enhanced IP mobility architectures, many advantages can be observed, such as:
(i) simplification of the signaling
protocol on the mobile node; (ii) optimization of the binding update procedure;
(iii) optimization of the registering procedure; and (iv) optimization of the L3
handover. Then, this work presents a performance evaluation of the
above-mentioned mobility architectures and the GMA based on the following factors: (i)
packet loss during a session; (ii) average handover latency to restore the
downstream flow during a session; (iii) average handover latency to restore the
upstream flow during a session; and (iv) buffer size requirement for buffering
and forwarding mechanisms. Finally, this work presents the result analysis of
the performance evaluation and documents, in its appendixes, the handlers of the
mobility manager entity, the GMP message format and the types of PDUs.
[09_MSc_bnunes]
Bernardo Pereira NUNES. Classificação automática de dados semi-estruturados. [Title
in English: Automatic classification of semi-structured data].
M.Sc. Diss. Port. Presentation: 3/04/09. 92 p. Advisor: Marco Antonio Casanova.
Abstract: The problem of data classification goes back to the definition of
taxonomies covering knowledge areas. With the advent of the Web, the amount of
data available has increased several orders of magnitude, making manual data
classification impossible. This dissertation proposes a method to automatically
classify semi-structured data, represented by frames, without any previous
knowledge about structured classes. The dissertation introduces an algorithm,
based on K-Medoid, capable of organizing a set of frames into classes,
structured as a strict hierarchy. The classification of the frames is based on a
closeness criterion that takes into account the attributes and their values in
each frame.
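The abstract names K-Medoid and a closeness criterion over attributes and values but gives no code; below is a minimal, illustrative k-medoids sketch in which a Jaccard-style similarity over frame attribute-value pairs stands in for the dissertation's closeness criterion. The frames, the `closeness` function, and all parameters are assumptions for illustration, and the strict-hierarchy construction step is omitted.

```python
import random

def closeness(f1, f2):
    # Jaccard-style similarity over attribute-value pairs (an illustrative
    # stand-in for the dissertation's closeness criterion).
    a, b = set(f1.items()), set(f2.items())
    return len(a & b) / len(a | b) if a | b else 1.0

def k_medoids(frames, k, iters=20, seed=0):
    rng = random.Random(seed)
    medoids = rng.sample(range(len(frames)), k)
    clusters = {}
    for _ in range(iters):
        # Assign every frame to its closest medoid.
        clusters = {m: [] for m in medoids}
        for i, f in enumerate(frames):
            best = max(medoids, key=lambda m: closeness(f, frames[m]))
            clusters[best].append(i)
        # Re-pick each medoid as the member maximizing total closeness.
        new_medoids = [max(ms, key=lambda c: sum(closeness(frames[c], frames[i])
                                                 for i in ms))
                       for ms in clusters.values()]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return clusters

frames = [{"type": "car", "wheels": 4},
          {"type": "car", "wheels": 4, "color": "red"},
          {"type": "bird", "wings": 2},
          {"type": "bird", "wings": 2, "song": "yes"}]
clusters = k_medoids(frames, 2)
print(len(clusters))  # 2
```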
[09_MSc_soares]
Bruno de Castro Bahia Alvarenga SOARES. Um método e um framework para o
planejamento empírico de sistemas multiagentes auto-organizáveis. [Title
in English: A method and a framework for the empirical planning of
self-organizing multi-agent systems].
M.Sc. Diss. Port. Presentation: 7/08/09. 77 p. Advisor: Carlos José Pereira
de Lucena.
Abstract: Software architectures have been recently adopting self-organizing
strategies to design dynamic and adaptive decentralized systems capable of
executing without interference from the user, from the administrator or from
other systems. This self-organization process usually results in unpredicted
emergent behaviors resulting from the interaction between the agents. Emergent
behaviors can be beneficial or harmful to the system execution and organization.
Because of this, multiagent systems will only be adopted in domains that
involve risk - like industry, hospitals, military equipment, etc. - if one
can guarantee that they will be able to fulfill their design goals. However,
this is not easy to guarantee, since it is impossible to map all the possible
behaviors of a system modeled through agent interaction. Some approaches have
already been proposed, but their use is usually extremely complex. Therefore,
this work presents a new method for the experimental verification of
self-organizing multiagent systems, evolving the empirical verification technique
proposed by Kevrekidis and complementing it through an autonomic approach linked
to the simplicity of the verification through online planners. As a result, a
framework for designing, verifying and simulating multiagent systems is
presented.
[09_MSc_borsato]
Bruno de Souza e Silva BORSATO.
Reputação de agentes aplicada ao mercado
simulado de capitais. [Title
in English: Software agents' reputation applied to simulated stock exchange
market].
M.Sc. Diss. Port. Presentation: 5/08/09. 67 p. Advisor: Carlos José Pereira
de Lucena.
Abstract: The use of software agents' reputation concepts in multi-agent systems
can assist the agents in these systems to choose more trustworthy interaction
partners, facilitating the achievement of these agents' goals, through more
accurate and well-weighed decisions. In the stock market domain, this aid is
fundamental to the proper discernment of the quality of the numerous sources of information
available to an investor. A multi-agent system that made possible the simulation
of stock exchange markets, as well as the full interaction among participating
agents through the exchange of reputation information, would serve as a
foundation for the use, study, test and improvement of concepts relating to
software agents' reputation and artificial intelligence. This Master's Thesis
aims at presenting the aforementioned system, as well as details of its
architecture and implementation, characteristics of its actors and the
interactions between them, and indications of strategies that can be adopted in
the development of agents. The proposed system's goal is to serve as a stimulus
to the academic development of multi-agent system technology, by carrying out
competitions involving agents' developers.
[09_MSc_rezende]
Bruno Loureiro REZENDE. Um framework para a automação de testes com linguagens
de especificação configuráveis. [Title
in English: A framework for test automation with configurable specification
languages].
M.Sc. Diss. Port. Presentation: 13/08/09. 82 p. Advisor: Arndt von Staa.
Abstract: In this work we create a framework for automated testing according to
test driven development practices that can be extended to support new
specification languages, following ideas from behavior driven development. We
expect that, by using this framework, software development teams will be able to
specify tests in languages better suited to the application domain, with the
support of a tool that allows this level of customization. The framework was
instantiated with the creation of a language based on state machines, and used
on a real-life project. The motivation for this work comes from the
experience with the
development of control systems, usually with real-time requirements, which often
have their behaviors described by state machines, possibly the best language for
this domain.
[09_PhD_silvestre]
Bruno Oliveira SILVESTRE. Modelos de concorrência e coordenação para o
desenvolvimento de aplicações orientadas a eventos em Lua. [Title
in English: Concurrency and coordination models for the development of
event-driven applications in Lua].
M.Sc. Diss. Port. Presentation: 14/08/09. p. Advisor: Noemi de La Rocque
Rodriguez.
Abstract: Multithreading has become popular as a way to organize concurrent
tasks and achieve better performance from CPUs. However, programming with
threads is not an easy task. Usage of shared resources needs to be coordinated
because concurrent access to them, in most cases, may create inconsistency in
the application. The event-driven model has been pointed as a good alternative
to multithreaded programming. In this model, a task is performed by one or more
events, where a main loop is responsible for receiving and dispatching these
events. In this work, we investigate a model to combine event-driven and
preemption in Lua without bringing back all the concurrent programming problems.
We also investigate how the language’s characteristics can be used to provide
flexible coordination mechanisms. These characteristics can aid, for example, to
create new mechanisms based on the ones already existent.
[09_MSc_nunes]
Camila Patrícia Bazílio NUNES.
Avaliação da modularidade e estabilidade de
técnicas de implementação para linhas de produtos de sistemas multi-agentes. [Title
in English: Modularity and stability assessment of implementation techniques for
multi-agent systems product lines].
M.Sc. Diss. Port. Presentation: 31/03/09. 118 p. Advisors: Carlos José Pereira
de Lucena and Uirá Kulesza.
Abstract: A Multi-Agent System Product Line (MAS-PL) defines a Software Product
Line (SPL) architecture whose design and implementation are accomplished using
software agents to address its common and variable features. The MAS-PL
development can be performed through MAS-specific platforms and implementation
techniques. Examples of such techniques are: object-oriented frameworks,
conditional compilation, configuration files and aspect-oriented programming (AOP).
However, the existing empirical studies do not focus on MAS-PL approach,
considering different implementation techniques and MAS specific platforms. In
this context, this work presents a systematic comparison of different
variability implementation techniques of agent features in the MAS-PL domain.
This systematic comparison involved the use of two platforms of MAS development
(JADE and Jadex) and implementation techniques: conditional compilation,
configuration files and AOP. In this study, a suite of software metrics was
used to evaluate quality attributes, such as modularity and stability. In order
to perform this study, two MAS-PLs were developed. The first one was the Expert
Committee MAS-PL, a product line of conference management systems. The second
one was the OLIS MAS-PL, which provides several personal services to the users.
The data collected during these empirical studies allowed us to report a
set of lessons learned.
[09_MSc_campos]
Carlos André Tavares CAMPOS. Comportamento das componentes de sistemas
multi-toque baseados em reflexão interna total confinada. [Title
in English: Components of a multitouch system based on frustrated total internal
reflection].
M.Sc. Diss. Port. Presentation: 12/02/09. 98 p. Advisor: Marcelo Gattass.
Abstract: Multitouch systems are becoming more popular replacing traditional
WIMP user interfaces. A multitouch system can be based on frustrated total
internal reflection of the light in an acrylic surface. With the use of
commodity projectors and cameras these systems are now becoming widely
affordable. Naturally, the quality of these systems depends upon the right choice
of materials, lights, cameras and projectors. Given its importance, there is a
need for more information on this subject in the literature. This dissertation
presents the general design of a multitouch system and a detailed discussion of
the expected behavior of each component. A prototype was built to evaluate the
proposed solution. Results are discussed to support some conclusions and
suggestions of future work.
[09_MSc_mendes]
Carlos Raoni de Alencar MENDES. Códigos de cobertura: limites e heurísticas. [Title
in English: Covering codes: bounds and heuristics].
M.Sc. Diss. Port. Presentation: 10/08/09. 76 p. Advisors: Marcus Vinicius
Soledade Poggi de Aragão and Emerson Luiz do Monte Carmelo.
Abstract: Data compression, speech coding, mobile telecommunications and
error-correction are some of the practical applications of the covering codes
study, an important field of coding theory. This work addresses two problems
of covering codes: the classic code covering problem and the recent short code
covering problem. It presents an application of the Reactive Tabu Search (RTS)
metaheuristic, an important variation of the classic Tabu Search, to both
problems. Moreover, it presents a new heuristic technique for solving
combinatorial optimization problems named Column Generation Improvement
Heuristic (CGIH). It also presents an application of CGIH for the covering codes
problems. The CGIH combines delayed column generation, a technique used to
solve problems with a large number of decision variables (columns), with local
search heuristics. A comparison of results obtained by the Reactive Tabu Search,
the Tabu Search without the reaction mechanism and CGIH is also presented in
order to assess
the effectiveness of the presented heuristics.
[09_MSc_lustosa]
Cecília Reis Englander LUSTOSA. 2-category and proof theory. [Title
in Portuguese: 2-Categoria e teoria da prova].
M.Sc. Diss. Eng. Presentation: 20/08/09. 70 p. Advisor: Edward Hermann
Haeusler.
Abstract: Natural Deduction for intuitionistic logic has been related to
Category Theory by what now is known as Categorical Logic. This relationship is
strongly based on the Curry-Howard Isomorphism between Natural Deduction and
typed (lambda)-Calculus. This dissertation describes some aspects of this
relationship with the aim of proposing a 2-categorical view of categorical logic.
We show that even under this 2-categorical view some of the drawbacks already
known in ordinary Category Theory still hold. We conclude this dissertation
by discussing the advantages of the 2-categorical view under some weaker assumptions.
[09_MSc_palomo]
Cesar Morais PALOMO.
Interactive image-based rendering for virtual view
synthesis from depth images. [Title
in Portuguese: Renderização interativa baseada em imagens para síntese de vistas
virtuais a partir de mapas de cor e profundidade].
M.Sc. Diss. Eng. Presentation: 6/07/09. 52 p. Advisor: Marcelo Gattass.
Abstract: Image-based modeling and rendering has been a very active research
topic as a powerful alternative to traditional geometry-based techniques for
image synthesis. In this area, computer vision algorithms are used to process
and interpret real-world photos or videos in order to build a model of a scene,
while computer graphics techniques use this model to create photorealistic
images based on the captured photographs or videos. The purpose of this work is
to investigate rendering techniques capable of delivering visually accurate
virtual views of a scene in real-time. Even though this work is mainly focused
on the rendering task, without the reconstruction of the depth map, it
implicitly overcomes common errors in depth estimation, yielding virtual views
with an acceptable level of realism. Tests with publicly available datasets are
also presented to validate our framework and to illustrate some limitations in
the IBR general approach.
[09_MSc_intrator]
Chantal INTRATOR.
Using scripts to improve Web accessibility. [Title
in Portuguese: Utilizando scripts para melhorar a acessibilidade na Web].
M.Sc. Diss. Eng. Presentation: 13/07/09. 103 p. Advisor: Clarisse Sieckenius
de Souza.
Abstract: As more and more resources become available online, the World Wide Web
is turning into an important stakeholder in every individual's life.
Nevertheless, not every segment of the distinct populations worldwide is able to
freely access and use it. A new approach to navigating the Web, in which users
make use of automated processes created by a community of volunteers, is
presented in this dissertation. The primary intent of this approach is to help
blind and functionally-illiterate users in navigating the web and accessing its
resources. This, though, is only the starting point for further investigations
with other populations with special needs. This dissertation presents a
collaborative web system, designed on top of an existing tool, for improving Web
accessibility.
[09_PhD_santos]
Cícero Nogueira dos SANTOS. Entropy guided transformation learning. [Title in
Portuguese: Aprendizado de transformações guiado por entropia]. Ph.D. Thesis.
Eng. Presentation: 20/03/09. 85 p. Advisor: Ruy Luiz Milidiú.
Abstract: This work presents Entropy Guided Transformation Learning (ETL), a new
machine learning algorithm for classification tasks. ETL generalizes
Transformation Based Learning (TBL) by automatically solving the TBL bottleneck:
the construction of good template sets. ETL uses the information gain in order
to select the feature combinations that provide good template sets. ETL provides
an effective way to handle high dimensional features. It also enables the
inclusion of the current classification feature in the generated templates. In
this work, we also present ETL Committee, an ensemble method that uses ETL as
the base learner. We also show that ETL can use the template evolution strategy
to accelerate transformation learning. We describe the application of ETL to
four language-independent Natural Language Processing tasks: part-of-speech
tagging, phrase chunking, named entity recognition and semantic role labeling.
Overall, we apply it to thirteen different corpora in six different languages:
Dutch, English, German, Hindi, Portuguese and Spanish. For each one of the tasks,
the ETL modeling phase is quick and simple. It only requires the training set
and a simple initial classifier. Our extensive experimental results demonstrate
that ETL is an effective way to learn accurate transformation rules. Using a
common parameter setting, ETL shows better results than TBL with handcrafted
templates for the four tasks. Moreover, the template evolution strategy provides
a significant training time reduction for all corpora. Our experimental results
also show that ETL Committee improves the effectiveness of ETL classifiers.
Using the ETL Committee approach, we obtain state-of-the-art competitive
performance results in the thirteen corpus-driven tasks. We believe that by
avoiding the use of handcrafted templates, ETL enables the use of transformation
rules to a greater range of classification tasks.
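ETL's template selection relies on information gain: the entropy of the class labels minus the expected entropy after splitting the examples on a feature. A minimal sketch of that computation follows; the feature name, example frames, and labels are illustrative assumptions, not data from the thesis.

```python
from math import log2
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label sequence, in bits.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, feature):
    # Entropy of the labels minus the expected entropy after splitting
    # the examples on `feature`.
    splits = {}
    for ex, y in zip(examples, labels):
        splits.setdefault(ex[feature], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in splits.values())
    return entropy(labels) - remainder

# Toy POS-tagging-style examples: the hypothetical "prev_tag" feature
# separates the labels perfectly, so its gain equals the full label
# entropy (1 bit).
examples = [{"prev_tag": "DET"}, {"prev_tag": "DET"},
            {"prev_tag": "VRB"}, {"prev_tag": "VRB"}]
labels = ["NOUN", "NOUN", "ADV", "ADV"]
print(information_gain(examples, labels, "prev_tag"))  # 1.0
```

Templates are then built from the highest-gain feature combinations rather than handcrafted by an expert.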
[09_PhD_ALO]
Claudia Cappelli ALÓ. Uma abordagem para transparência em processos
organizacionais utilizando aspectos. [Title
in English: An approach to business processes transparency using aspects].
Ph.D. Thesis. Eng. Presentation: 10/08/09. 328 p. Advisor: Julio Cesar Sampaio
do Prado Leite.
Abstract: Transparency has been a desire of democratic societies for a long
time. The right to be informed and have access to information has been a major
problem in modern societies. The demand for truth based on transparency has
increased in the context of global change. The importance of openness in the
flow of information is creating an open society in which the very idea is to
establish a democratic society with engaged citizens able to understand and use
the information that is accessible to them. However, it is not sufficient for an
organization to wish to be transparent. Organizations need to know what
transparency is and how they can apply this concept to their business. This
thesis defines transparency and a way of using it in business processes, using
these processes as a means of making explicit the transparency within
organizations. An aspect oriented approach is proposed in order to allow the
introduction of transparency characteristics into business processes. Policies
and standards are proposed as well as a technique for inserting new elements
into the business processes using a transparency catalog. In order to validate
the proposed definition of transparency, an analysis of organizational
processes, using two surveys, was performed. A case study, using the business
processes of a real organization, was performed to validate the proposed
approach. This case study produced some preliminary results on the applicability
and feasibility of using this approach.
[09_PhD_vasconcelos]
Cristina Nader VASCONCELOS.
Algoritmos para processamento de imagens e visão
computacional para arquiteturas paralelas em placas gráficas. [Title in English:
Image processing and computer vision algorithms for graphic cards parallel
architectures] Ph.D. Thesis.
Port. Presentation: 23/03/09. 155 p. Advisor: Marcelo Gattass.
Abstract: Arithmetically intensive operations, replicated over huge data sets
(usually image pixels or scanned data), are an important part of many Computer
Vision (CV) tasks, making them good candidates for taking advantage of the
processing power of contemporary graphics processing units (GPUs). This thesis
formulates a set of CV algorithms that use low level representations of visual
content and are tailored for running on GPUs. A general view of GPUs and
parallel programming patterns that offers interesting building blocks for CV
tasks provides the necessary background for the algorithms. We also propose a
formal definition for the Multiple Reduction pattern and evaluate its efficiency
according to different reduction factors and layouts. We present two techniques
for extracting data from the image space using the GPU: MOCT, a technique for
tracking a set of objects identified by their colors from natural videos, and
MRR, a technique for distributing the evaluation of a set of operators defined
over different regions of interest within an image. As a MRR application we
describe a Centroidal Voronoi Diagram construction based on Lloyd's algorithm
but entirely computed using GPU resources. We also deal with visual content
representations as pixel agglomerations, more specifically, as Regional
Quadtrees. We introduce the QuadN4tree: a new model for representing quadtree
leaves that allows navigation through their neighborhood systems and achieves an
optimal cost for the retrieval of neighbor sets. We also propose modifying the
setup for CV applications based on energy minimization via graph cuts,
introducing a preprocessing step that groups similar pixels into regional
quadtree leaves. This modification aims to reduce the size of the graph for
which a minimum cut is to be found. We apply our proposed method to the problem
of natural image segmentation by active illumination. Published papers detailing
some contributions of this dissertation are included as appendixes. They present
data-parallel formulations for the CV tasks we describe.
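The Multiple Reduction pattern discussed above builds on the classic parallel reduction, which combines elements pairwise so that each pass halves the data and about log2(n) passes suffice. The following is a sequential Python sketch of that access pattern, not the thesis's GPU code; on a GPU, each pass's pairs would be evaluated in parallel.

```python
def tree_reduce(values, op):
    # Log-depth pairwise reduction: each pass combines adjacent pairs,
    # halving the working set, as a GPU reduction kernel does per pass.
    vals = list(values)
    while len(vals) > 1:
        nxt = [op(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element carries over to the next pass
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

print(tree_reduce(range(1, 9), lambda a, b: a + b))  # 36
print(tree_reduce([3, 1, 4, 1, 5], max))             # 5
```

A "multiple reduction" runs several such reductions at once, e.g. one per region of interest in the image.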
[09_MSc_rocha]
Daniel Amaral de Medeiros ROCHA. Combinando metaeurísticas com resolvedores MIP
com aplicações ao Generalized Assignment Problem (GAP). [Title
in English: Combining metaheuristics with MIP solvers with applications to the
Generalized Assignment Problem (GAP)].
M.Sc. Diss. Eng. Presentation: 11/08/09. 71 p. Advisor: Marcus Vinicius
Soledade Poggi de Aragão.
Abstract: Methods that mix strategies usually found in metaheuristic algorithms
with techniques to solve mixed integer programming problems (MIPs) have had
great results over the past few years. This work proposes two new algorithms in
this philosophy: one is based on the Path Relink (PR) metaheuristic, while the
other one is a simple algorithm that does post-processing in the solutions found
by the MIP solver. Both algorithms use a new neighborhood
structure, called ellipsoidal neighborhood, that has strong resemblances with
the relinking step from PR algorithms and that, in this work, is generalized and
extended for multiple solutions. The generalized assignment problem (GAP) is
used for the computational experiments. Also tested are a MIP solver (ILOG CPLEX
version 11) and a branch and price algorithm that uses the RINS and guided dives
heuristics. The tested algorithms are compared among themselves and with
GAP-specific heuristics. The results are satisfactory and show that the
ellipsoidal neighborhoods can frequently improve the solutions found by the MIP
solver, even finding the best result for some instances.
[09_MSc_rego]
Diego Cedrim Gomes REGO.
Execução otimizada de transações financeiras: um estudo
empírico. [Title in English: Optimized financial trade execution: an empirical
study]. M.Sc. Diss. Port. Presentation: 05/03/09. 107 p. Advisor: Eduardo Sany
Laber.
Abstract: We present a comparative empirical study of the Optimized Trade
Execution problem in modern financial markets. We build a financial market
simulator and then, based on this tool, we compare the performance of many
strategies available in the literature. The best results were achieved by
strategies that make use of machine learning techniques.
[09_MSc_andrea]
Eduardo Fonseca de ANDRÉA. Monitorando o ambiente de execução de componentes de
software distribuídos. [Title
in English: Monitoring the execution environment of distributed software
components].
M.Sc. Diss. Port. Presentation: 13/04/09. 92 p. Advisor: Renato Fontoura de
Gusmão Cerqueira.
Abstract: Component-based systems are characterized by the construction of
applications through the composition of available software artifacts.
Interactions may occur between different components that can be distributed
through several machines. As distributed applications increase in size, the
interactions between the various nodes that comprise them become more complex.
Therefore it is important for distributed component systems to monitor the
interactions between components in order to identify failures and bottlenecks in
processing and communication. This dissertation presents an architecture capable
of offering extensible mechanisms for monitoring the execution environment of
distributed components, and the interactions between their components. It also
presents a flexible mechanism for the publication of the collected information,
and some comparative tests to measure the performance penalty imposed by the
infrastructure on the application.
[09_MSc_almentero]
Eduardo Kinder ALMENTERO. Re-engenharia do software C&L para plataforma
Lua-Kepler utilizando princípios de transparência. [Title
in English: Re-engineering of C&L software to Lua-Kepler platform using
transparency principles].
M.Sc. Diss. Port. Presentation: 8/04/09. 112 p. Advisor: Julio Cesar Sampaio
do Prado Leite.
Abstract: Transparency is a keyword present in different contexts such as the
economic and the political ones, and, currently, one of the new contexts, in
which it stands, is software. Free (open source) software is a good example of
transparency, where the great advantage is that one can access the source code
and then choose the characteristics he/she wants, but, in this way, we will be
serving only those who understand the source code. Understanding software
source code can be an arduous task, especially if no technique has been used to
facilitate reading. In this work we explore a method for developing free
software based on the use of scenarios. The result of applying this method is a
single document, the source code, in which the scenarios will be integrated
within the code, making it easier to read and understand, thus bringing more
transparency to the software. This method was refined during its application to
reengineer the "C&L" software. In order to produce additional documentation,
besides the scenarios embedded in the code, we used the LEL (Language Extended
Lexicon) technique to map the namespace of the new "C&L".
[09_MSc_amorim]
Eduardo Telles CARLOS.
Frustum culling híbrido utilizando CPU e GPU. [Title
in English: Hybrid frustum culling using CPU and GPU].
M.Sc. Diss. Port. Presentation: 8/04/09. 99 p. Advisor: Marcelo Gattass.
Abstract: The definition of visibility is a classical problem in Computer
Graphics. Several algorithms have been developed to enable the visualization of
huge and complex models. Among these algorithms, frustum culling, which
plays an important role in this area, is used to remove objects that are
invisible to the observer. Besides being widely used in applications, this
algorithm has been improved in order to accelerate its execution. Although
treated as a well-solved problem in Computer Graphics, some points can still be
enhanced, and new forms of culling may be discovered as well. In massive models,
for example, high-performance algorithms are required, since the computational
cost increases considerably. This work analyses the frustum culling algorithm and its
optimizations, aiming to obtain the state-of-the-art algorithm implemented in
CPU, as well as explains the influence of each of its steps in massive models.
Based on this analysis, new GPU (Graphics Processing Unit) based frustum culling
techniques will be developed and compared with the ones using only CPU. As a
result, a hybrid frustum culling will be proposed, in order to achieve the best
of CPU and GPU processing.
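The CPU-side test such an algorithm performs can be illustrated with the standard sphere-versus-planes check: an object is culled when its bounding sphere lies entirely behind any one of the six frustum planes. The cube-shaped "frustum" below is a hypothetical example for illustration, not data from the dissertation.

```python
def sphere_outside_frustum(center, radius, planes):
    # Each plane is (nx, ny, nz, d) with the normal pointing into the
    # frustum, so a point p is inside the half-space when
    # nx*px + ny*py + nz*pz + d >= 0.
    x, y, z = center
    for nx, ny, nz, d in planes:
        # If the whole sphere lies behind any single plane, it cannot
        # be visible, so it is culled immediately.
        if nx * x + ny * y + nz * z + d < -radius:
            return True
    return False

# Six planes of an axis-aligned [-1, 1]^3 box standing in for a frustum.
cube = [(1, 0, 0, 1), (-1, 0, 0, 1),
        (0, 1, 0, 1), (0, -1, 0, 1),
        (0, 0, 1, 1), (0, 0, -1, 1)]
print(sphere_outside_frustum((0, 0, 0), 0.5, cube))  # False: kept
print(sphere_outside_frustum((5, 0, 0), 1.0, cube))  # True: culled
```

GPU variants batch this per-object test across thousands of bounding volumes at once, which is what the hybrid approach exploits.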
[09_MSc_carvalho]
Elaine Alves de CARVALHO. Heurísticas para identificação de requisitos de data
warehouses a partir de indicadores de desempenho. [Title
in English: Heuristics for data warehouse requirements identification using
performance measures].
M.Sc. Diss. Port. Presentation: 27/11/09. 128 p. Advisor: Rubens Nascimento
Melo.
Abstract: Organizations need to change and evolve, but for that it is necessary
to make the right decisions. To support these decisions, companies rely on
Information Technology (IT) as a fundamental component. An essential
IT component to improve the process of decision making is the data warehouse. In
order to fulfill its role well, the data warehouse must be well defined. There
are various approaches that try to improve the task of identifying data
warehouse requirements, but few explore the contributions of Business
Process Engineering (BPE) to the process of requirements gathering. This
dissertation studies how to improve data warehouses requirements elicitation
using performance indicators allied to business processess. For this it is
suggested a set of heuristics designed to guide performance measures
identification and data warehouse requirements discovery. The heuristics are
applied in a case to facilitate understanding of suggested approach in this
work.
[09_MSc_amorim]
Evelin Carvalho Freire de AMORIM.
NCE: Um algoritmo para extração de conteúdo de
páginas de notícias. [Title
in English: An algorithm for content extraction from news pages].
M.Sc. Diss. Port. Presentation: 31/03/09. 83 p. Advisor: Eduardo Sany Laber.
Abstract: Entity extraction from web pages is commonly used to enhance the
quality of tasks performed by search engines, such as duplicate page detection
and ranking. Entity extraction is crucial because search engines have to deal
with a fast-growing volume of information on the Web. There
are many algorithms that detect entities in the literature, some using site
level strategy and others using page level strategy. The site level strategy
uses many pages from the same site to create a model that extracts templates.
The page level strategy creates a model to extract templates according to
features of the page. Here we present an algorithm, called NCE (News Content
Extractor), that uses a page level strategy and its objective is to perform
entity extraction on news pages. It uses features from a DOM tree to search for
certain entities, namely, the news title and news body. Some measures are
presented and used to evaluate how good NCE is. When we compare NCE to a page
level algorithm that uses visual features, NCE shows better execution time and
extraction quality.
[09_PhD_queiroz]
Fábio Mascarenhas de QUEIROZ.
Optimized compilation of a dynamic language to a
managed runtime environment. [Title
in Portuguese: Compilação otimizada de uma linguagem dinâmica para um ambiente
de execução gerenciado].
Ph.D. Thesis. Port. Presentation: 4/09/09. 97 p. Advisor: Roberto
Ierusalimschy.
Abstract: Managed runtime environments have become popular targets for compilers
of high-level programming languages. They provide a high-level type system with
enforced runtime safety, as well as facilities such as garbage collection,
possibly sandboxed access to services of the underlying platform, multithreading,
and a rich library of data structures and algorithms. But managed runtime
environments lack a clear performance model, which hinders attempts at
optimizing the compilation of any language that does not have a direct mapping
to the runtime environments' semantics. This is aggravated if the language is
dynamically typed. We assert that it is possible to build a compiler for a
dynamic language that targets a managed runtime environment and rivals, in the
efficiency of the code it generates, a compiler that targets machine code
directly. This dissertation presents such a compiler, describing the
optimizations that were needed to build it, and benchmarks that validate these
optimizations. Our optimizations do not depend on runtime code generation, only
on information that
is available from the source program. We use a novel type inference analysis to
increase the amount of information available.
[09_MSc_maffra]
Fabíola Alvares Rodrigues de Souza MAFFRA. Detecção de características faciais
utilizando FERNS. [Title
in English: Facial features detection based on FERNS].
M.Sc. Dissertation. Port. Presentation: 20/08/09. 66 p. Advisor: Marcelo
Gattass.
Abstract: Over the last decades, face detection and facial features detection
have received a great deal of attention from the scientific community, since
these tasks are essential for a number of important applications, such as face
recognition, face tracking, human-computer interaction, face expression
recognition, security, etc. This work proposes the use of a classifier based on FERNS
to recognize interest points across images and then detect and track the facial
features. We begin with a brief review of the most common approaches used in
facial features detection and also the theory around the FERNS. In addition, an
implementation of the facial features detection based on FERNS is presented to
provide results and conclusions. The method proposed here relies on an offline
training phase during which multiple views of the keypoints to be matched are
synthesized and used to train the FERNS. The facial features detection is
performed on images acquired in real-time from many different viewpoints and
under different lighting conditions.
[09_MSc_quintella]
Felipe Ferreira QUINTELLA.
DWeb3D: um toolkit para facilitar a criação e
manipulação de cenas 3D usando X3D. [Title
in English: DWeb3D: a toolkit to help the creation and use of 3D scenes using
X3D].
M.Sc. Dissertation. Port. Presentation: 20/08/09. 80 p. Advisor: Marcelo
Gattass.
Abstract: This work studies the challenges of 3D applications on the web. It
analyzes the current scenario of 3D Web applications, the reasons for the low
adoption of existing solutions, what works well, and what does not. A new
approach is then suggested for the construction of 3D worlds and interactive
applications. The study is focused on the X3D standard because it is open,
supported by an international consortium, mature and in constant development,
but
with a low adoption rate. The X3D qualities and problems are discussed and
correlated with other solutions. In this process, some needs of current
applications were identified, along with the complexity of X3D in dealing with
these issues. As an attempt to demonstrate that the complexity of X3D in some aspects
may be reduced, the DWeb3D toolkit was built. DWeb3D is a toolkit to help the
development of dynamic X3D applications. It was created as a way to demonstrate
that it is possible to facilitate the development process, increasing the access
to developers in this area. The toolkit provides tools to deal with publishing,
synchronism, interactivity, multi-user management and disk persistence.
[09_PhD_carvalho]
Felipe Gomes de CARVALHO. HybridDesk: uma abordagem para transições entre
interfaces em um sistema híbrido semi-imersivo. [Title in English: HybridDesk:
an approach for transitions between interfaces in a hybrid semi-immersive
system].
Ph.D. Thesis. Port. Presentation: 13/02/09. 158 p. Advisors: Alberto Barbosa Raposo and Marcelo Gattass.
Abstract: The post-WIMP (Windows, Icons, Menus and Pointer) user interfaces are
bringing new interaction modalities and the use of new input and output devices.
Many of these new interfaces are not yet mature and issues related
with a clear definition of an application context and technological requirements
are still under investigation. The study of the relationship of the properties
of interaction devices and their influence on the performance of 3D tasks (navigation,
manipulation, and selection) is an important factor to identify adequate setups
for carrying out these tasks. However, in a broader context, each task can be
decomposed into sub-tasks whose technological demands can be a challenge, since
they require multiple interaction environments, as well as transitions between
them. These transitions are relevant because they can either orient or disorient
the user during the overall task execution. Thus, this thesis aims to propose a
technological setup (a set of interaction devices) to integrate the advantages
of different functional environments. The transitional interactions between them
are explored by investigating the design properties (dimensional congruency and
properties of continuity) during the 3D annotation task execution. In order to
achieve such a goal, a semi-immersive environment composed of 3 functional
environments was developed. That composition includes a set of devices enabling
the tasks of text editing (2D), navigation and manipulation (3D). Assuming that
3D annotation forces transitions between the integrated functional spaces, an
exploratory study was conducted with users to investigate the behavior of
interaction during these transitions by identifying the influence of
the design properties discussed in this work.
[09_MSc_napolitano]
Fillipe Machado Pinto NAPOLITANO. Uma estratégia baseada em simulação para
validação de modelos em i*. [Title in English: A simulation-based validation
strategy for i* models]. M.Sc. Diss. Eng. Presentation: 11/08/09. 65 p. Advisor:
Julio Cesar Sampaio do Prado Leite.
Abstract: The understanding of the organization before starting the development
of organizational systems has been very effective in requirements elicitation.
In this context, intentionality concepts expressed through the i* framework
have been widely used by researchers and by some companies. However, using the
i* framework to model organizational intentionality brings with it the inherent
complexity of the models. Thus, the main objective of this work is to develop a
simulation-based strategy to help the requirements engineer validate the i*
models elicited with the stakeholders, without
absorbing the inherent complexity of these models. We also present the results
of the strategy implementation in a case study.
[09_MSc_santanna]
Francisco Figueiredo Goytacaz SANT'ANNA. A synchronous reactive language based
on implicit invocation. [Title
in Portuguese: Uma linguagem reativa síncrona baseada em invocação implícita].
M.Sc. Dissertation. Port. Presentation: 16/03/09. 65 p. Advisor: Roberto
Ierusalimschy.
Abstract: The reactive programming paradigm covers a wide range of applications,
such as games and multimedia systems. Mainstream languages neglect reactive
programming, lacking language-level primitives that focus on synchronism and
interactions within application parts. We propose a new reactive synchronous
language, with an imperative style, whose primitives are based on unconventional
implicit invocation mechanisms. With this work, we intend to unite the essential
features of reactive languages while keeping a convenient imperative style of
programming. A reactive scheduler is responsible for executing reactors, our
processing units, based on dynamically built dependency relations between them.
Our language provides dataflow programming, sequential imperative execution,
and deterministic use of shared memory.
[09_MSc_medeirosjunior]
Gilberto Paiva de MEDEIROS JUNIOR. Composição de imagens com iluminação HDR
através de um aprimoramento da técnica de renderização diferencial. [Title in
English: Image compositing using HDR illumination through an improvement to the
technique of differential rendering]. M.Sc. Diss.
Port. Presentation: 13/02/09 127 p. Advisor: Bruno Feijó.
Abstract: This work aims to present an improvement to the state-of-the-art image
compositing technique using Differential Rendering. An application is
developed using HDR images, in which combining virtual and real elements into a
single scene produces better results than the ones obtained by the original
technique. Firstly, this work presents some techniques of image compositing, and
concepts on HDR images and image-based lighting. Next, the original technique by
Paul Debevec is presented through its calculations and examples of image
compositing. The proposed improvement adds one more image to the set of images
used by the original technique. This improvement is of great importance for
applications that need a high degree of realism, because it allows a better
integration between the virtual and real parts of a scene. Several tests are
presented that prove the effectiveness of the proposed improvement.
[09_MSc_soares]
Gleidson Fonseca SOARES. Algoritmos primais e duais para o problema das
p-medianas. [Title
in English: Primal and dual algorithms for the p-median problem].
M.Sc. Dissertation. Port. Presentation: 13/04/09. 107 p. Advisor: Marcus
Vinicius Soledade Poggi de Aragão.
Abstract: A facility is any center that offers services to a set of clients. It
may be, among others, a school, a factory or a depot. Facility location problems
are combinatorial optimization problems that handle decision making with respect to
the positioning of those services, optimizing some defined criteria. The
measures often used to assess the quality of a solution for this class of
problems relate to which clients are served by which facility. An immediate
consequence is the strong relationship between location problems and data
clustering. One of the widely studied facility location problems is the
uncapacitated p-median problem (UPM), the main subject of this thesis. Given a set
of possible facility locations, the UPM consists in determining a subset of
locations at which the facilities shall be established, minimizing the sum of
distances from each client to its closest open facility. The UPM belongs to the
class of NP-hard problems and is a central problem of data clustering. This
thesis presents primal, dual and exact algorithms for approaching the UPM,
focusing on the development of dual and exact algorithms. Five constructive
heuristics and one local search method were implemented. Furthermore, three new
dual methods and one exact method were proposed. The result is the analysis of a
set of techniques to solve the problem. The choice of the best technique is
strongly dependent on the configuration of the treated instance. We obtained the
optimum for some instances; for the others, the difference between the lower and
upper bound values in the best cases does not exceed 3%.
[09_MSc_baptista]
Gustavo Luiz Bastos BAPTISTA.
Gerenciamento de mobilidade e tratamento de
desconexão baseado em SIP. [Title in English: Mobility management and
disconnection handling based on SIP] M.Sc. Diss. Port. Presentation: 13/02/09 100 p.
Advisors: Markus Endler and Vagner Jose do Sacramento Rodrigues.
Abstract: Advances in computer networks, telecommunications, and portable mobile
devices have increased the demand for applications and services that are
suitable for environments with intermittent connectivity and mobility of devices.
A central question for the viability of such applications in environments like
these is the existence of solutions for Mobility Management, i.e., the automatic
maintenance of the connectivity between system components in scenarios where the
devices change their IP addresses dynamically as they reconnect to different
network domains. Starting from an initial investigation of the main solutions
for Mobility Management, this dissertation presents the development of a
solution on the application layer based on the SIP protocol. Then, it presents
the adaptation of an
existing publish/subscribe system to make it use the developed Mobility
Management solution, in order to provide support for the mobility and
disconnection of event producers and consumers, and also NAT and firewall
traversal, enabling the system to be used on the Internet, and not only on local
networks. The publish/subscribe model has been chosen as a case study for the
implemented solution, because it provides asynchronous and anonymous
interactions that are very well suited to the requirements of a mobility
scenario. The adaptation of the publish/subscribe system started from an
investigation of the main requirements a system like this should meet in order
to support mobility and disconnection of devices. The referred system has been
tested for different scenarios, and its performance has been evaluated for
different configurations, and it has been compared to the conventional publish/subscribe
system.
[09_PhD_mello]
Hélcio Bezerra de MELLO.
Propostas de roteamento para redes veiculares (VANETs)
em ambientes urbanos. [Title in English: A traffic-light aided routing protocol
for vehicular networks]. Ph.D. Thesis. Port. Presentation: 28/01/09 125 p. Advisor:
Markus Endler.
Abstract: VANETs (Vehicle Ad Hoc NETworks) are a special case of mobile ad hoc
networks where vehicles are equipped with wireless communication interfaces.
These vehicles may move at high speeds and data transmission in urban scenarios
may easily be blocked by buildings and other sort of obstacles. Such factors
contribute to make inter-vehicle communication intermittent and packet routing
more difficult. One of the main challenges faced by routing protocols is
avoiding low-traffic streets, where the lack of vehicles tends to make packet
forwarding impossible. For this reason, traffic information on each street is
essential for the computation of the best route between any given two vehicles.
Specifically in urban scenarios, traffic light transitions cause significant
fluctuations on traffic flow over time. Given this fact, this thesis proposes
TLAR (Traffic Light Aided Routing), a new routing algorithm for VANETs that
exploits traffic light transition timings in order to determine which streets
will offer the greatest probabilities for successful packet forwarding.
Simulation results indicate a good performance of this algorithm compared to
existing approaches.
[09_MSc_fonseca]
Hubert Aureo Cerqueira Lima da FONSECA. Um middleware baseado em componentes
para adaptação dinâmica na plataforma Android. [Title
in English: A component-based middleware for dynamic adaptation on the Android
platform].
M.Sc. Dissertation. Port. Presentation: 11/08/09. 74 p. Advisor: Markus
Endler.
Abstract: Mobile applications should have the ability to adapt their behaviour
according to changes in their context. Specific or spontaneous user requests,
variations in the availability of system resources, like energy or wireless
connectivity, or changes of the user's location are possible reasons for such
adaptations, that usually aim to adjust the application's operation to the new
context, optimize its performance or personalize its user interface. Aiming to
offer greater facility for implementing dynamically adaptive mobile
applications, the Kaluana middleware defines a service-oriented, component-based
model. This model supports dynamic component composition, reconfiguration and
deployment. Applications executed on the middleware can compose Kaluana
components at execution time. Therefore, these applications are dynamically
adaptive, taking advantage of the model features. Component development is
faster due to the use of computational reflection tools and service orientation
concepts that provide adequate abstractions to the developer. This way,
dynamically adaptable applications built upon Kaluana consist of compositions of
these software components. Kaluana was implemented on top of the Android
platform and was tested for the development of map-based, location-aware mobile
applications.
[09_MSc_nunes]
Ingrid Oliveira de NUNES. A domain engineering process for developing
multi-agent systems product lines. [Title in Portuguese: Um processo de
engenharia de domínio para o desenvolvimento de linhas de produto de sistemas
multi-agentes]. M.Sc. Diss. Eng. Presentation: 31/03/09. 138
p. Advisors: Carlos José Pereira de Lucena and Uirá Kulesza.
Abstract: Multi-agent System Product Lines (MAS-PLs) have emerged to integrate
two promising trends of software engineering: Software Product Lines (SPLs) and
Agent-oriented Software Engineering. The former came to promote reduced
time-to-market, lower development costs and higher quality on the development of
applications that share common and variable features. These applications are
derived in a systematic way, based on the exploitation of application
commonalities and large-scale reuse. The latter is a new software engineering
paradigm that addresses the development of Multi-agent Systems (MASs), which are
composed of autonomous and pro-active entities, named agents. This paradigm aims
at helping the development of complex and distributed systems, characterized
by a large number of parts that have many interactions. The main goal of MAS-PLs
is to incorporate the benefits of both software engineering disciplines and to
help the industrial exploitation of agent technology. Only recently has research
work aimed at integrating SPLs and MASs. In this context, this work presents a domain
engineering process for developing MAS-PLs. We define the process activities
with their tasks and work products, in order to provide a process that addresses
modeling agent variability and agent features traceability. Our approach is the
result of an investigation of how current SPL, MAS and MAS-PL approaches can
model MAS-PLs. Based on this experience, we propose our process, which includes some techniques and notations of some of these approaches: PLUS
method, PASSI methodology and MAS-ML modeling language. In particular, the
scenario we are currently exploring is the incorporation of autonomous and
pro-active behavior into existing web systems. The main idea is to introduce
software agents into existing web applications in order to (semi)automate tasks,
such as providing autonomous recommendation of products and information to users.
[09_MSc_bertran]
Isela Macia BERTRAN.
Avaliação da qualidade de software com base em modelos UML.
[Title in English: Evaluation of software quality based on UML models].
M.Sc. Diss. Port. Presentation: 25/03/09 131 p. Advisors: Arndt von Staa and
Claudio Nogueira Sant'Anna.
Abstract: One of the goals of software engineering is the development of high
quality software at a low cost and in a short period of time. In this context,
several techniques have been defined for controlling the quality of software
designs. Furthermore, many metrics-based mechanisms have been defined for
detecting software design flaws. Most of these mechanisms and techniques focus
on analyzing the source code. However, in order to reduce unnecessary rework it
is important to use quality analysis techniques that allow the detection of
design flaws earlier in the development cycle. We believe that these techniques
should analyze design flaws starting from software models. This dissertation
proposes: (i) a set of strategies to detect, in UML models, specific and
recurrent design problems: Long Parameter List, God Class, Data Class, Shotgun
Surgery, Misplaced Class and God Package; and (ii) the use of the QMOOD quality
model to analyze class diagrams. To automate the application of these mechanisms
we implemented a tool: the QCDTool. The detection strategies and QMOOD model
were evaluated in the context of two experimental studies. The first study
analyzed the accuracy, precision and recall of the proposed detection strategies.
The second study analyzed the utility of using the QMOOD quality model on class
diagrams. The results of the first study have shown the benefits and drawbacks
of the application in class diagrams of some of the proposed detection
strategies. The second study shows that it was possible to identify, based on
class diagrams, variations of the design properties and consequently, of the
quality attributes in the analyzed systems.
[09_PhD_magalhaes]
João Alfredo Pinto de MAGALHÃES. Recovery oriented software. [Title
in Portuguese: Software orientado à recuperação].
Ph.D. Thesis. Eng. Presentation: 3/09/09. 104 p. Advisor: Arndt von Staa.
Abstract: Recovery oriented software is built with the perspective that hardware
or software failures as well as operation mistakes are facts to be coped with,
since they are problems that cannot be fully solved while developing real
complex applications. Consequently, any software will always have a non-zero
chance of failure. Some of these failures may be caused by defects that could be
removed or encapsulated. A key issue is to increase the detectability of errors,
in other words, increase the self-awareness of the software's behavior. In this
work, we present the
results of systematically applying already well known techniques (design by
contract, self-checking software, software components, debuggable software,
design for testability, mock components and patterns) with the intent of
creating recovery oriented software. Measuring the development of five different
real-time and real-world applications, we analyzed the effects of the adoption of
these techniques. In particular, we observed the balancing of the effort spent in
different development stages and explored the "redundancy of reasoning" concept
that, as well as providing a higher detectability and debuggability, also leads
to enhancing quality-by-construction. The results were encouraging since they
were systematically better than those reported in the literature and were
achieved at a feasible cost.
[09_PhD_viterbo]
José VITERBO FILHO.
Decentralized reasoning in
ambient intelligence. [Title
in Portuguese: Inferência descentralizada em ambientes inteligentes].
Ph.D. Thesis. Eng. Presentation: 3/09/09. 114 p. Advisor: Markus Endler.
Abstract: Ubiquitous computing features the seamless integration of computer
systems into the everyday lives of users to provide information and
functionalities anytime and anywhere. Such systems encompass different kinds of
sensors and mobile devices interconnected through a combination of several
wireless network technologies. A particular trend in this area is exploring the
Ambient Intelligence (AmI) paradigm, which aims at the integration of innovative
technologies to create computer-mediated environments that support user
activities through specific services, with minimal user intervention. In AmI
systems, reasoning is fundamental for triggering actions or adaptations
according to specific situations that may be meaningful and relevant to some
applications. Most middleware systems adopt a centralized approach for their
reasoning mechanisms. In AmI environments, however, these reasoning operations
may need to evaluate context data collected from distributed sources and stored
in different devices, as usually not all context data is readily available to
the reasoners within a ubiquitous system. The goal of this thesis is to propose
a decentralized reasoning approach for performing rule-based reasoning about
context data targeting AmI systems. To this end, we defined a context model
assuming that in AmI environments context data is distributed over two sides: the
user side, represented by the users and their mobile devices, and the ambient
side, represented by the fixed computational infrastructure and ambient services.
We formalized the cooperative reasoning operation - in which two entities
cooperate to perform decentralized rule-based reasoning - and defined
a complete process to perform this operation. Finally, to show the feasibility
of this approach, we designed, implemented and evaluated a middleware service
supporting decentralized reasoning based on the cooperative reasoning process.
[09_PhD_duarte]
Julio César DUARTE.
O algoritmo Boosting at Start e suas aplicações. [Title in
English: The Boosting at Start algorithm and its applications]. Ph.D. Thesis.
Port. Presentation: 8/09/09. 87 p. Advisor: Ruy Luiz Milidiú.
Abstract: Boosting is a Machine Learning technique that combines several weak
classifiers with the goal of improving the overall accuracy. In each iteration, the
algorithm updates the example weights and builds an additional classifier. A
simple voting scheme is used to combine the classifiers. The most famous
Boosting-based algorithm is AdaBoost. This algorithm increases the weights of
the examples that were misclassified by the previous classifiers. Thus, it
focuses the additional classifier on the hardest examples. Initially, a uniform
weight distribution is assigned to the examples. However, there is no guarantee
that this is the best choice for the initial distribution. In this work, we
present Boosting at Start (BAS), a new Machine Learning approach based on
Boosting. BAS generalizes AdaBoost by allowing the use of an arbitrary initial
distribution. We present schemes for the determination of such distribution. We
also show how to adapt BAS to Semi-supervised learning schemes. Additionally, we
describe the application of BAS in different problems of data and text
classification, comparing its performance with the original AdaBoost algorithm
and some state-of-the-art algorithms for such tasks. The experimental results
indicate that a simple modelling using the BAS algorithm generates effective
classifiers.
[09_MSc_duarte]
Leonardo Seperuelo DUARTE. Simulação de grãos em GPU. [Title
in English: Grains simulation on GPU].
M.Sc. Diss. Port. Presentation: 14/12/09. p. Advisor: Waldemar
Celes Filho.
Abstract: Not available yet.
[09_MSc_fracalanza]
Livia Fonseca FRACALANZA.
Mineração de dados voltada para recomendação no âmbito
de marketing de relacionamento. [Title in English: Recommendation based on data
mining for relationship marketing]. M.Sc. Diss. Port. Presentation: 02/04/09 59 p.
Advisor: Marco Antonio Casanova.
Abstract: Cross-selling is a strategy to recommend products to customers based
on their past purchases or the purchases of other customers with the same profile.
The best known algorithm for the analysis of a client's shopping basket is known
in the literature as market basket analysis. This dissertation discusses the
discovery of sequential patterns in large databases and aims at implementing an
efficient algorithm that transforms the shopping cart problem into a maximum
clique problem. First, input data is transformed into a graph and maximum
cliques are detected to discover the most frequent relationship between the
items on the transaction. The dissertation also includes experiments that
evaluate the efficiency of the algorithm for large data volumes.
[09_PhD_leme]
Luiz André Portes Paes LEME. Conceptual schema matching based on similarity
heuristics. [Title in Portuguese: Alinhamento de esquemas conceituais baseado em
heurísticas de similaridade]. Ph.D. Thesis. Eng. Presentation: 23/03/09. 106 p. Advisor: Marco
Antonio Casanova.
Abstract: Schema matching is a fundamental issue in many database applications,
such as query mediation, database integration, catalog matching and data
warehousing. In this thesis, we first address how to match catalogue schemas. A
catalogue is a simple database that holds information about a set of objects,
typically classified using terms taken from a given thesaurus. We introduce a
matching approach, based on the notion of similarity, which applies to pairs of
thesauri and to pairs of lists of properties. We then describe matchings based on
co-occurrence of information and introduce variations that explore certain
heuristics. Lastly, we discuss experimental results that evaluate the precision
of the matchings introduced and that measure the influence of the heuristics. We
then focus on the more complex problem of matching two schemas that belong to an
expressive OWL dialect. We adopt an instance-based approach and, therefore,
assume that a set of instances from each schema is available. We first decompose
the problem of OWL schema matching into the problem of vocabulary matching and
the problem of concept mapping. We also introduce sufficient conditions
guaranteeing that a vocabulary matching induces a correct concept mapping. Next,
we describe an OWL schema matching technique based on the notion of similarity.
Lastly, we evaluate the precision of the technique using data available on the
Web. Unlike any of the previous instance-based techniques, the matching process
we describe uses similarity functions to induce vocabulary matchings in a
non-trivial way, coping with an expressive OWL dialect. We also illustrate,
through a set of examples, that the structure of OWL schemas may lead to
incorrect concept mappings and indicate how to avoid such pitfalls.
[09_MSc_albuquerque]
Luiz Fernando Fernandes de ALBUQUERQUE. Avaliação de algoritmos online para
seleção de links patrocinados. [Title
in English: Online algorithms analysis for sponsored links selection].
M.Sc. Dissertation. Port. Presentation: 11/12/09. p. Advisor: Eduardo Sany
Laber.
Abstract: Not available yet.
[09_PhD_loaizafernandez]
Manuel Eduardo LOAIZA FERNANDEZ. Calibração de múltiplas câmeras baseada em um
padrão invariante. [Title
in English: Multiple camera calibration based on invariant pattern].
Ph.D. Thesis. Port. Presentation: 20/08/09. 131 p. Advisor: Marcelo Gattass
and Alberto Barbosa Raposo.
Abstract: Calibration of multiple cameras is an important step in the
installation of optical tracking systems. The accuracy of a tracking system is
directly related to the quality of the calibration process. Several calibration
methods have been proposed in the literature in conjunction with the use of
artifacts, called calibration patterns. These patterns, with shape and size
known, allow the capture of reference points to compute camera parameters. To
yield good results, these points must be uniformly distributed over the tracking
area. The determination of the reference points in the image is an expensive
process prone to errors. The use of a good calibration pattern can reduce these
problems. This thesis proposes a new multiple camera calibration method that is
efficient and yields better results than previously proposed methods available
in the literature. Our method also uses a new, simple calibration pattern based
on perspective-invariant and useful geometric properties. This pattern yields
robust reference point identification and more precise tracking. This thesis
also revisits the multiple camera calibration process and suggests a framework
to compare the existing methods, including the one proposed
here. This framework is used to produce a flexible implementation that allows a
numerical evaluation that demonstrates the benefits of the proposed method.
Finally the thesis presents some conclusions and suggestions for further work.
[09_MSc_alencar]
Marcus Franco Costa de ALENCAR. Composição de métodos de avaliação de IHC para
ambientes virtuais híbridos: um estudo de caso com a HybridDesk. [Title
in English: Composition of HCI evaluation methods for hybrid virtual
environments: a case study with HybridDesk].
M.Sc. Dissertation. Port. Presentation: 21/08/09. 157 p. Advisor: Simone Diniz
Junqueira Barbosa and Alberto Barbosa Raposo.
Abstract: Interaction design has been finally recognized as a key factor for a
broader acceptance and the effective utilization of applications and systems.
Virtual environment (VE) applications are being developed in growing numbers and
aiming to fulfill even more diverse and innovative needs. But the knowledge
about interaction design and evaluation of VE applications is still recent and
lacks well established methods and techniques. The focus of this study is the
communicability and usability evaluation of the HybridDesk, an equipment
designed for the interaction with VEs that includes three different interaction
environments. The communicability evaluation method (CEM), based on semiotic
engineering, is a recent method that performs a qualitative evaluation focused
on the user's reception of the designer's metacommunication message at
interaction time, aiming to identify communicability issues and to produce a
semiotic profiling of the metacommunication. There are several usability evaluation methods, both
qualitative and quantitative, including some already applied to VEs. This study
has applied some of these methods, favoring a qualitative evaluation, although
also providing some quantitative results. It includes a heuristic evaluation as
well as usage observation sessions with a focus on the user perception of the
interaction, aiming to identify HCI issues in hybrid virtual environments. This
study also produces a comparison of the evaluation results of the various
methods applied, demonstrating that they all contribute in a distinct way to the
evaluation results of the HybridDesk. The results also highlight and reinforce
the importance of some interaction aspects, like the compatibility among the
various signification systems involved during user interaction, and the need to
look for the seamless integration of 1D, 2D and 3D tasks from the user
perspective.
[09_MSc_santos]
Paulo Ivson Netto SANTOS. Ray tracing dynamic scenes on the GPU. [Title
in Portuguese: Traçado de raios de cenas dinâmicas na GPU].
M.Sc. Dissertation. Port. Presentation: 23/03/09. 61 p. Advisor: Waldemar
Celes Filho.
Abstract: We present a technique for ray tracing dynamic scenes using the GPU.
In order to achieve interactive rendering rates, it is necessary to use a
spatial structure to reduce the number of ray-triangle intersections performed.
Whenever there is movement in the scene, this structure is entirely rebuilt.
This way, it is possible to handle general unstructured motion. For this purpose,
we have developed an algorithm for reconstructing Uniform Grids entirely inside
the graphics hardware. In addition, we present ray-traversal and shading
algorithms fully implemented on the GPU, including textures, shadows and
reflections. Combining these techniques, we demonstrate interactive ray tracing
performance for dynamic scenes, even with unstructured motion and advanced
shading effects.
[09_MSc_gomes]
Paulo Roberto GOMES.
Um estudo sobre avaliação da execução do BLAST em ambientes
distribuídos. [Title
in English: A study about evaluation of BLAST execution in distributed
environments].
M.Sc. Dissertation. Port. Presentation: 8/04/09. 128 p. Advisor: Sérgio
Lifschitz.
Abstract: BLAST tools are typically used to compare sequences of DNA, RNA, and
proteins. However, given the exponential growth of biological databases, there
is concern about the performance of BLAST, even considering the powerful
computing equipment available today. For this reason, several tools that run
BLAST in distributed environments, such as clusters and grids, have been
developed to greatly accelerate it. To date, however, no study comparing the
performance of these tools has been found in the literature. Their performance
evaluation is usually done in isolation, considering only the execution time
(elapsed time) in different situations, for example, by varying the number of
nodes on which the BLAST tool runs. Aiming at a more detailed investigation,
especially regarding the performance evaluation of BLAST in distributed
environments, this dissertation has as one of its goals a detailed study
comparing the performance of BLAST in a distributed environment, evaluating
three BLAST tools, among them balaBLAST, developed in the Bioinformatics
Laboratory of PUC-Rio. The second objective is to verify the effectiveness of
the load balancing performed by balaBLAST.
[09_MSc_teixeira]
Pedro Henriques dos Santos TEIXEIRA.
Data stream anomaly detection through
principal subspace tracking. [Title
in Portuguese: Detecção de anomalia em fluxo de dados através de rastreamento do
subespaço principal].
M.Sc. Dissertation. Port. Presentation: 4/09/09. 94 p. Advisor: Ruy Luiz
Milidiú.
Abstract: The complexity of data centers and the high volume of generated
monitoring data pose many challenges to system administrators. Current tools
rely on experts to configure fixed thresholds for each data stream, which is
neither efficient nor appropriate for dynamic systems. We study an unsupervised
learning technique based on spectral analysis proposed for anomaly detection. In
this technique, just one pass over the data is allowed. It tracks the principal
subspace of r dimensions from N numerical data streams. An anomaly is considered
to be a sudden change in system behaviour and is indicated by a change in the
number of latent variables. A previous approach relies on a PAST-type subspace
tracker, which is based on an inverse matrix update and is known to be unstable.
It requires an extra normalization step in order to guarantee the expected
reconstruction error, which actually adds O(Nr²) to the time complexity per
update instead of the advertised O(Nr). In this work, we present FRAHST, the first
rank-adaptive principal subspace tracker based on the new recursive
row-Householder algorithm, which is the state-of-the-art for an algorithm of
this kind. It is stable, provides orthonormal basis and has a true dominant
complexity of only O(Nr) flops. Our technique reduces the number of false
positives by 75% compared with the previous subspace technique on the public
ABILENE PACKETS dataset, while doubling the number of detections. By embedding
lagged values in the input vector to exploit temporal correlations, FRAHST
successfully detects subtle anomalous patterns in the data
and when compared against four other anomaly detection techniques, it is the
only one with a consistent F1 score above 80% in all four datasets. As part of
this work, a real-time system was successfully implemented to monitor the
infrastructure of an ISP, providing a good use case for unsupervised anomaly
detection in the industry.
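The link between anomalies and the number of latent variables can be made concrete with the energy-thresholding rule that rank-adaptive subspace trackers of this family use. The sketch below is a simplified illustration, not FRAHST itself (which maintains the orthonormal basis with a recursive row-Householder update); the energy bounds e_low and e_high are hypothetical values:

```python
# Illustrative rank-adaptation rule for an energy-based subspace tracker.
import numpy as np

def hidden_dimension(z, Q, r, e_low=0.96, e_high=0.98):
    """Decide the number r of latent variables for input vector z.

    Q: N x r_max matrix with orthonormal columns (the tracked basis).
    The tracker keeps the energy captured by the first r columns between
    e_low and e_high of ||z||^2; a change in r signals an anomaly
    (a sudden change in system behaviour).
    """
    total = float(z @ z)
    if total == 0.0:
        return r
    h = Q.T @ z                       # projection onto the tracked subspace
    energy = np.cumsum(h ** 2) / total
    if r < Q.shape[1] and energy[r - 1] < e_low:
        return r + 1                  # subspace no longer explains the data
    if r > 1 and energy[r - 2] > e_high:
        return r - 1                  # fewer latent variables suffice
    return r
```

Each update of the rule costs O(Nr), consistent with the dominant complexity claimed for this class of trackers.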
[09_MSc_vieira]
Pedro Sampaio VIEIRA. Um sistema de modelagem 3D de coluna vertebral baseado em
imagens de raios-x. [Title
in English: A spine 3D modeling system based on xray images].
M.Sc. Dissertation. Port. Presentation: 21/08/09. 59 p. Advisor: Marcelo
Gattass.
Abstract: Research involving computer graphics and laboratory exams has
contributed much to the quality of medical diagnosis. One line of this research
is directly related to the 3D reconstruction of anatomical structures of the
human body, especially the spine. The sedentary lifestyle and the high
dependence on computers have increased the postural problems of the population.
Therefore, new techniques for 3D reconstruction based on Computed tomography (CT),
magnetic resonance imaging (MRI) and x-ray images are required, in order to make
the clinical evaluation increasingly accurate. This work proposes a 3D modeling
system based on x-ray images that yields a virtual spine model. The recovery of
three-dimensional information of each vertebra helps improve the assessment
currently made using only 2D images. The technique used here is based on stereo
radiography. The use of x-ray images instead of CT significantly reduces the
patient's exposure to radiation and is more accessible to the general
population due to its lower cost. The results presented here show good accuracy
despite this lower cost. The proposed method achieved results very close to
those based on expensive CT or MRI, where the input image is better than x-ray
images.
[09_MSc_meloni]
Raphael Belo da Silva MELONI.
Classificação de imagens de sensoriamento remoto
usando SVM. [Title in English: Remote sensing image classification using SVM]
M.Sc. Diss. Port. Presentation: 25/03/09 64 p. Advisor: Ruy Luiz
Milidiu.
Abstract: Image classification is an information extraction process in digital
images for pattern and homogeneous object recognition. In remote sensing, it
aims to find patterns in the pixels of digital images covering an area of the
earth's surface, for subsequent analysis by a specialist. In this dissertation,
we employ Support Vector Machines, a machine learning methodology, for this
image classification problem, due to the possibility of working with large
quantities of features. We built classifiers for the problem using different
image information, such as the RGB and HSB color spaces, altimetric values, and
the infrared channel of a region. The altimetric values contributed to excellent
results, since these values are fundamental characteristics of a region and had
not previously been considered in remote sensing image classification. We
highlight the final result for the swimming pool identification problem, when
the neighborhood is two: 99% accuracy, 100% precision, 93.75% recall, 96.77%
F-score, and a Kappa index of 96.18%.
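For reference, all the figures quoted above can be derived from a single binary confusion matrix. The sketch below is illustrative: the counts (15 true positives, 0 false positives, 1 false negative, 84 true negatives, i.e. a hypothetical 100-sample test set) are chosen so that the standard formulas reproduce the reported percentages; they are not taken from the dissertation:

```python
# Standard binary classification metrics from confusion-matrix counts.

def metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: agreement beyond what chance alone would produce.
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)
    p_e = p_yes + p_no
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, precision, recall, f_score, kappa

print(metrics(tp=15, fp=0, fn=1, tn=84))
```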
[09_PhD_maia]
Renato Figueiró MAIA.
Uma avaliação do uso de linguagens dinâmicas no
desenvolvimento de middleware: um estudo de caso da implementação de um ORB na
linguagem Lua. [Title
in English: An evaluation of the use of dynamic languages in middleware
development: a case study of an ORB implementation in the Lua language].
Ph.D. Thesis. Port. Presentation: 6/04/09. 165 p. Advisor: Renato Fontoura de
Gusmão Cerqueira.
Abstract: The focus of middleware research and development has changed from high
performance to more flexibility. A similar change has happened in the
application development scenario with the recent popularization of dynamic
languages. In this work, we present an analysis of the use of dynamic languages
in the implementation of middleware through a use case. Such case is the design
and implementation of OiL, a middleware in the Lua language based on the model
of distributed objects, which follows a modular implementation based on
reconfigurable components. OiL was developed during this work with the purpose
of taking advantage of specific Lua features that enable a simple, flexible, and
efficient implementation. We present an analysis of OiL's implementation that
identifies Lua's characteristics that contributed to a more flexible
implementation, that is, an implementation easier to change. This analysis is
based on the Cognitive Dimensions of Notations, a framework for evaluation of
notations and languages, which is used in this work as the evaluation criteria
to measure how easy it is to adapt OiL's implementation in different scenarios.
We also compare the performance of OiL with other similar implementations in
languages that are less dynamic, such as C++ and Java.
[09_PhD_rocha]
Ricardo Couto Antunes da ROCHA. Context management for distributed and dynamic
context-aware computing. [Title in Portuguese: Gerenciamento de contexto para
computação sensível ao contexto em ambientes distribuídos e dinâmicos]. Ph.D. Thesis.
Eng. Presentation: 06/02/09. 100 p. Advisor: Markus Endler.
Abstract: In context-aware computing, applications perform adaptations at the
occurrence of pre-defined context-based situations. Research in context-aware
computing has produced a number of middleware systems for context management,
i.e. to intermediate the communication between applications and sensor/agents
that generate context information. However, development of ubiquitous
context-aware applications is still a challenge because most current middleware
systems are still focused on isolated and static context-aware environments. For
example, applications typically require global knowledge of all underlying
context management systems in order to start context-based interactions.
Moreover, context-aware environments are inherently dynamic as a result of
occasional additions or upgrade of sensors, applications or context inference
mechanisms. The main challenge of this scenario is to accommodate such changes
in the environment without disrupting running context-aware applications. Some
middleware approaches that tackle some of these problems do not properly support
other aspects, such as generality and scalability of context management. This
thesis argues that in distributed and dynamic environments, context-aware
applications call for context interests of variable wideness, i.e. primitives
for describing context-based conditions that involve context types and context
management systems that cannot be defined in advance. To
achieve this goal, this thesis proposes a novel architecture for context
management based on the concept of context domains, allowing applications to
keep context interests across distributed context management systems. To
demonstrate the feasibility of the approach, this thesis describes a distributed
middleware that implements the aforementioned concepts, without compromising
scalability and efficiency of context access. This middleware allows the
development of context-aware applications for mobile devices, and runs on two
platforms: Android and Java J2ME CDE 1.1.
[09_MSc_szczerbacki]
Ricardo SZCZERBACKI. Uso de técnicas baseadas em pontos para visualização de
horizontes sísmicos. [Title
in English: Using point based techniques for seismic horizons visualization].
M.Sc. Dissertation. Port. Presentation: 10/02/09. 50 p. Advisor: Marcelo
Gattass.
Abstract: Seismic horizon visualization stands as an important knowledge area
used to support exploration in the oil industry. Different techniques currently
employed to render this kind of surface are usually based on polygonal mesh
generation, which benefits from graphics board optimizations for drawing
triangles. This work is an evaluation of Point Based rendering techniques to
replace polygonal approaches in seismic horizons visualization. To do so, this
study revisits each stage of the seismic visualization process. The algorithm
adopted here is based on the Surface Splatting with the EWA filter. This work
also presents a study on normal evaluation and on data structures to store
points and normals. Special care is taken with shading techniques. The
implementation yielded results that support the evaluation of Point Based
techniques on real 3D seismic data. Traditional triangle-based rendering is also
presented to compare results.
[09_MSc_gomes]
Roberta Claudino Barreto Pessanha GOMES. Uma linha de produto de sistemas de
gerenciamento de projetos de software baseados em agentes. [Title
in English: A software product line of project management systems based on
agents].
M.Sc. Diss. Port. Presentation: 14/12/09. p. Advisor: Carlos José Pereira
de Lucena.
Abstract: Not available yet.
[09_MSc_couto]
Rodnei Silva COUTO. Uma meta-ferramenta de geração de diagramas utilizada na
engenharia reversa de sistemas legados. [Title
in English: A meta-tool for generating diagrams used in the reverse engineering
of legacy systems].
M.Sc. Diss. Port. Presentation: 4/08/09. 92 p. Advisors: Arndt von Staa and
Gustavo Robichez de Carvalho.
Abstract: The recovery of the documentation of the structure of a legacy system
aims at supporting its understanding and maintenance. Based on diagrams that
describe the structure of the system as it was implemented, it is easier to
understand the system and to analyze the impact that changes may have. This dissertation
introduces a meta-tool that uses metadata for its instantiation aiming at
specific representations. An experimental study on the recovery of models of a
legacy system implemented in PL/SQL was conducted to enable the evaluation of
the meta-tool and the reverse engineering process that it supports.
[09_MSc_henrique]
Rodrigo Carneiro HENRIQUE. Uma arquitetura de serviços de acesso a dados
estruturados em aplicações científicas. [Title
in English: An architecture for structured data access services in scientific
applications].
M.Sc. Diss. Port. Presentation: 8/04/09. 54 p. Advisors: Renato Fontoura de
Gusmão Cerqueira and Carlos Roberto Serra Pinto Cassino.
Abstract: Scientific applications usually handle large amounts of data with
proprietary and complex representations. These characteristics represent a great
challenge for sharing data between scientific applications. The main goal of
this work is to provide an architecture of software services that allows
flexible and efficient access to large amounts of data served by such
applications. Case studies are presented to show the flexibility that can be
achieved with this architecture. These experiments are strongly based on actual
data used in scientific applications developed by Tecgraf/PUC-Rio. We also
present an evaluation of different techniques of data encoding based on
experiments conducted to measure the performance achieved by an implementation
of the proposed architecture.
[09_MSc_fernandes]
Rodrigo Costa FERNANDES. Registro de sísmica 3D a dados de poços. [Title
in English: Registration of 3D seismic to well data].
M.Sc. Diss. Port. Presentation: 21/08/09. 71 p. Advisor: Marcelo Gattass.
Abstract: Data acquired directly from the borehole are more reliable than
seismic data and can therefore be used to adjust the latter. This work proposes
the correction of a volume of seismic amplitudes through a three step algorithm.
The first step is the identification of common features in both sets using a
pattern recognition algorithm. The second step consists of the generation and
the optimization of a mesh aligned with the features in the volumetric data
using a new algorithm based on image processing and computational intelligence.
The last step is the seismic-to-well registration using a point-to-point
volumetric deformation achieved by a radial basis function interpolation. The
dissertation also presents some results from 2D and 3D implementations allowing
conclusions and suggestions for future work.
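The point-to-point volumetric deformation via radial basis function interpolation mentioned in the last step can be sketched as follows. This is an illustration only: the Gaussian basis, its width `eps`, and all data are assumptions, not the dissertation's choices:

```python
# Illustrative RBF warp: deform coordinates so that source control points
# (features found in the seismic volume) land exactly on their targets
# (the corresponding well-data positions).
import numpy as np

def rbf_warp(sources, targets, points, eps=1.0):
    """Deform `points` so every source control point maps to its target."""
    sources, targets, points = map(np.asarray, (sources, targets, points))

    def phi(r):
        return np.exp(-(eps * r) ** 2)   # Gaussian basis (an assumption)

    # Pairwise distances between control points give the interpolation matrix.
    d = np.linalg.norm(sources[:, None, :] - sources[None, :, :], axis=-1)
    # One weight column per spatial axis of the displacement field.
    weights = np.linalg.solve(phi(d), targets - sources)
    d_pts = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=-1)
    return points + phi(d_pts) @ weights
```

By construction the interpolant is exact at the control points, which is what makes it suitable for a seismic-to-well registration step.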
[09_MSc_araujo]
Samur Felipe Cardoso de ARAÚJO.
Explorator: uma ferramenta para exploração de
dados RDF baseado em uma interface de manipulação direta. [Title in English:
Explorator: a tool for exploring RDF data through direct manipulation]. M.Sc. Diss. Port. Presentation: 02/02/09 127 p. Advisor: Daniel Schwabe.
Abstract: In this dissertation we propose a tool for Semantic Data exploration.
We developed an exploration model that allows users without any prior knowledge
of the data domain to explore an RDF database. To this end, we present an
operation model that, supported by an interface based on the direct-manipulation
and query-by-example paradigms, allows users to explore an RDF base both to gain
knowledge and to answer questions about a domain, through navigation, search,
and other exploration mechanisms. We also developed a facet
specification model and a mechanism for automatic facet extraction that can be
used in the development of facet navigation systems over RDF. The final product
of this work is a tool called Explorator that we are proposing as an environment
for Semantic Web data exploration.
[09_MSc_jena]
Sanjay Dominik JENA. A mixed integer programming approach for sugar cane
cultivation and harvest planning. [Title in Portuguese: Uma abordagem de
programação inteira mista para o planejamento de cultivo e colheita de
cana-de-açúcar]. M.Sc. Diss. Eng. Presentation: 18/03/09. 153 p. Advisor: Marcus Vinicius Soledade Poggi de Aragão.
Abstract: Mathematical Programming techniques such as Mixed Integer Programming
(MIP), one of today's principal tools in Combinatorial Optimization, allow a
fairly close representation of industrial problems. In fact, contemporary MIP
solvers are able to solve large and difficult problems from practice. Whereas
the use of MIP in industrial sectors such as logistics, packing, or chip design
is widespread, its practice in agriculture is still relatively young. Yet the
planning of agricultural cultivation and harvesting is a complex task. Sugar
cane is one of the most important agricultural commodities of Brazil, the
world's largest producer of this crop, which is used to produce sugar and
alcohol. Most of the planning methods in use, manual or computer aided, still
result in a high waste of resources, in the field and in commercialization. The
purpose of this work is to provide decision support, based on Optimization
techniques, for sugar cane cultivation and harvesting. A decision support system
is implemented. It divides the planning into a tactical and an operational
stage. The problem, proved to be NP-hard, consists in determining the best
moment to harvest each field, maximizing the total profit given by the sugar
content of the cane. It considers resources such as cutting and transport crews,
processing capacities in sugar cane mills, the use of maturation products and
the application of vinasse on harvested fields. The MIP model extends the
classical Packing formulation, incorporating a network flow for the harvest
scheduling. Several pre-processing techniques are used to reduce the problem
size. Heuristically obtained initial solutions are passed to the solver in order
to facilitate the solution. A problem segregation strategy based on semantic
information is presented, leading to very competitive results. As the optimal
solution of the linear relaxation turned out to be highly fractional, this work
also invests in valid inequalities in order to strengthen the MIP formulation.
Several further solution approaches such as Local Branching and heuristics based
on the optimum of the linear relaxation were explored. One of Brazil's large
sugar cane producers was involved in the entire development process in order to
guarantee a realistic representation of its processes. All experiments were
performed with instances from practice provided by this producer.
[09_PhD_bim]
Sílvia Amélia BIM.
Obstáculos ao ensino dos métodos de avaliação da Engenharia
Semiótica. [Title
in English: Obstacles to the teaching of the Semiotic Engineering evaluation
methods].
Ph.D Thesis. Port. Presentation: 13/08/09. 181 p. Advisors: Clarisse
Sieckenius de Sousa and Carla Faria Leitão.
Abstract: This thesis presents the results of a qualitative research about the
difficulties that are present in the teaching and learning process of two
Human-Computer Interaction (HCI) evaluation methods. It concerns the Semiotic
Inspection Method (SIM) and the Communicability Evaluation Method (CEM) both
proposed by Semiotic Engineering to analyze the quality of the communication
between the designer and the user through the interface of interactive systems.
The research consisted of fourteen interviews and the analysis of the
development of three HCI disciplines, from which three sets of results were
mapped and analyzed: (i) practical difficulties, mainly related to the limited
time available to cover an extensive volume of HCI topics, to crowded classes,
and to the lack of teaching material and examples of the methods' application;
(ii) difficulties in the development of three heavily interdependent abilities
required for learning the methods – systematic interpretation, abstraction, and
a broad view; and (iii) teaching initiatives to minimize the
identified difficulties. These results are interpreted in relation to both the
difficulties specific to the Semiotic Engineering context and to those that are
shared by other HCI approaches and by classical and fundamental Computer Science
areas. The difficulties in interpreting, abstracting, and constructing a broad
view of a problem, far from being specific to the teaching of the methods under
investigation, are serious obstacles in the education of many areas and stages
of training in Computer Science. Specific or general solutions of a particular
area are, in fact, contributions to the development of abilities that, once
acquired, will make the teaching and learning process of other areas, where
these abilities are also essential, easier. Thus, the identification and
analysis of difficulties found in the Semiotic Engineering context are
contributions that may benefit both the teaching process of the theory and its
methods and the teaching of other topics and areas of Computer Science. The
main contributions of this work are thus related to the identification of some
factors that must guide reflection about academic and professional training in
Computer Science, and serve as a reference for the formulation of strategies and
for the development of teaching material for its diverse areas.
[09_MSc_aguiar]
Thiago Valente AGUIAR. Visualização da fronteira entre fluidos utilizando o
método SPH e o algoritmo de marching cubes. [Title
in English: Visualization of the boundary between fluids using the SPH method
and the Marching Cubes algorithm].
M.Sc. Diss. Port. Presentation: 17/09/09. 58 p. Advisor: Bruno Feijó.
Abstract: In engineering and special effects for film and television, fluid
modeling is frequently used to simulate natural phenomena, such as fire,
explosions, smoke, water, and air. Basically, this modeling uses mesh-based
methods such as finite volumes and finite elements, or mesh-free methods based
on particles, such as MPS and SPH. A common issue among particle techniques is
the problem of the boundary between fluids, since no general and satisfactory
solution is available. The objective of this work is to propose a new technique
for the visualization of the boundary between fluids using the SPH method
(Smoothed-Particle Hydrodynamics) and the Marching Cubes algorithm. Tests are
made to investigate the computational costs involved in the generation and
rendering of the boundary mesh.
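The SPH-plus-Marching-Cubes pipeline described above needs a scalar field to polygonize. A minimal illustrative sketch (not the dissertation's code) of how such a field can be sampled from particles uses the standard poly6 smoothing kernel; the kernel choice, the smoothing radius, and the unit mass are assumptions. The boundary is then the iso-surface where this field crosses a chosen threshold, which Marching Cubes extracts cell by cell:

```python
# Illustrative SPH density field for iso-surface extraction.
import numpy as np

def poly6(r2, h):
    """Muller-style poly6 smoothing kernel, non-zero only for r^2 < h^2."""
    coef = 315.0 / (64.0 * np.pi * h ** 9)
    return coef * np.maximum(h * h - r2, 0.0) ** 3

def density_field(particles, grid_points, h, mass=1.0):
    """Sum of kernel contributions of every particle at each grid point."""
    diff = grid_points[:, None, :] - particles[None, :, :]
    r2 = np.sum(diff ** 2, axis=-1)       # squared particle-to-point distances
    return mass * poly6(r2, h).sum(axis=1)
```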
[09_MSc_rodrigues]
Thoran Araguez RODRIGUES.
Estudo comparativo de estratégias de classificação de
páginas Web. [Title in English: A comparative study of Web page classification
strategies]. M.Sc. Diss. Port. Presentation: 03/03/09 82 p. Advisor: Eduardo
Sany Laber.
Abstract: The amount of information on the Internet increases every day. Even
though this proliferation increases the chances that the subject being searched
for by a user is on the Web, it also makes finding the desired information much
harder. The automated classification of pages is, therefore, an important tool
for organizing Web content, with specific applications on the improvement of
results displayed by search engines. In this dissertation, a comparative study
of different attribute sets and classification methods for the functional
classification of web pages was made, focusing on 4 classes: Blogs, Blog Posts,
News Portals, and News. Throughout the experiments, it became evident that the
best approach for this task is to employ attributes that come from both the
structure and the text of the web pages. We also present a new strategy for
extracting and building text attribute sets that takes into account the
different writing styles of each page class.
[09_MSc_rodrigues]
Vinicius Lopes RODRIGUES. Visualização de dados geográficos vetoriais sobre
terrenos em multi-resolução [Title in English: Vectorial geographic data
visualization on multi-resolution terrain]. M.Sc. Diss. Port. Presentation: 21/12/09. p. Advisor:
Waldemar Celes Filho.
Abstract: Not available yet.
[09_MSc_dantas]
Vitor Cavalcanti DANTAS. Algoritmos para problemas de programação de horários de
cursos pós-matrícula. [Title in English: Algorithms for post enrollment-based
course timetabling]. M.Sc. Diss. Port. Presentation: 13/04/09 90 p. Advisor:
Marcus Vinicius Soledade Poggi de Aragão.
Abstract: Timetabling problems have been widely studied, given their practical
and theoretical relevance. Most of their variations belong to the NP-hard class
of problems. In general, they concern the allocation of material and human
resources in time and space, aiming to optimize some set of defined objectives.
In University Course Timetabling, for example, the objective might be the
satisfaction of professors and the academic performance of students. In recent
years, the formulations for timetabling problems proposed by the International
Timetabling Competition (ITC) have been widely adopted. The predominance of
meta-heuristics and local search-based methods is remarkable among recently
proposed approaches. The objective of this thesis is to propose algorithms for
the Post Enrolment-based Course Timetabling Problem of the ITC, focusing on
Mathematical Programming-based heuristic methods. Among the Mixed Integer Linear
Programming models that we propose for this problem, we highlight the one based
on the Asymmetric Representatives Formulation for the Graph Coloring Problem. We
explore the application of the Local Branching heuristic and we propose a Column
Generation solution procedure, as an attempt to handle the proposed models,
given that the complexity of such models poses a challenge for currently
available Mixed Integer Linear Programming solvers.