Research topic ideas

Reinforcement Learning to reduce energy consumption (Dr Luis Miralles)
Machine Learning (ML) has become very popular in recent decades because it allows models to be built from historical data with little human intervention. ML has been widely used for classification and for predicting values. This project, however, focuses on a branch of ML called Reinforcement Learning (RL). In RL, an agent moves from one state to another, receiving a positive or negative reward at each step. The agent has to learn the best action for each state by maximizing the total reward accumulated from the initial to the final state; the resulting mapping from states to actions is called the optimal policy. RL has achieved breakthroughs in recent years, such as beating professional players at the game of Go, which had remained one of the biggest challenges in Artificial Intelligence. The project aims to find the best possible combinations of actions for reducing the energy bill of an industrial company. To this end, we will simulate the energy consumption of such a company and then implement an RL algorithm such as Deep Q-Learning to find out which actions, at each point, reduce energy consumption as much as possible.
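The tabular variant of Q-learning that underlies Deep Q-Learning can be sketched on a toy tariff problem. The hours, tariff values and reward function below are invented for illustration only, not taken from the project's simulator:

```python
import random

random.seed(0)

# Toy setting: each state is an hour of the day (0-23) and the action is how
# many machines to run (0-2). The reward trades production value against a
# time-varying electricity tariff, so the agent should shift work to cheap hours.
HOURS, ACTIONS = 24, 3
tariff = [0.10 if h < 7 else 0.30 for h in range(HOURS)]  # EUR/kWh, invented

def reward(hour, machines):
    production_value = 0.25 * machines
    energy_cost = tariff[hour] * machines
    return production_value - energy_cost

# Tabular Q-learning: nudge Q[s][a] towards r + gamma * max_a' Q[s'][a'].
Q = [[0.0] * ACTIONS for _ in range(HOURS)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for _ in range(2000):
    for h in range(HOURS):
        if random.random() < epsilon:
            a = random.randrange(ACTIONS)                    # explore
        else:
            a = max(range(ACTIONS), key=lambda x: Q[h][x])   # exploit
        nxt = (h + 1) % HOURS
        Q[h][a] += alpha * (reward(h, a) + gamma * max(Q[nxt]) - Q[h][a])

# Greedy policy: run all machines during cheap night hours, none by day.
policy = [max(range(ACTIONS), key=lambda a: Q[h][a]) for h in range(HOURS)]
```

Deep Q-Learning replaces the table `Q` with a neural network, which becomes necessary once the state space (real consumption profiles) is too large to enumerate.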

Reinforcement Learning to optimize Government actions to mitigate the COVID pandemic (Dr Luis Miralles)
Whenever a country is hit by a pandemic, its Government must take the best actions to protect public health while also reducing the negative impact of the pandemic on the economy. The Government can define different phases, ranging from more to less restrictive, but it is crucial to develop tools that help it plan these phases according to its priorities. In this work, we develop a model based on Reinforcement Learning (RL) to help governments combat viral pandemics such as COVID. To this end, we will implement an SEIR model to represent the spread of the virus through the population. We will compare our RL-based approach with other optimization approaches such as Genetic Algorithms or Ant Colony Optimization. We expect to provide new insights on this topic that help countries manage pandemics better.
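The SEIR model mentioned above can be sketched with simple Euler steps. The parameter values below are illustrative assumptions, not calibrated to COVID:

```python
# SEIR sketch: S susceptible, E exposed, I infectious, R recovered.
N = 1_000_000                                 # population size (invented)
beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 10     # infection / incubation / recovery rates

def peak_infectious(days, restriction=1.0):
    """Return the peak number of simultaneously infectious people.

    `restriction` scales the contact rate beta (1.0 = no measures,
    0.5 = strong measures), the lever an RL agent would control per phase.
    """
    S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
    peak = I
    for _ in range(days):
        new_e = restriction * beta * S * I / N   # S -> E
        new_i = sigma * E                        # E -> I
        new_r = gamma * I                        # I -> R
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
        peak = max(peak, I)
    return peak

# Restrictions should flatten the curve: a lower peak of infectious people.
unrestricted = peak_infectious(365, 1.0)
restricted = peak_infectious(365, 0.5)
```

An RL agent would choose the `restriction` level phase by phase, receiving a reward that balances infections against economic cost.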

A human-in-the-loop solution for the construction of explainable AI systems (Dr. Lucas Rizzo)
The recent increased use of “black-box” systems in Artificial Intelligence has led to issues of trustworthiness and understandability for the end-users operating such systems. The inability to explain the inferences of these systems severely limits their applicability. In theory, approaches exist that make “black-box” systems interpretable and understandable by humans; in practice, however, they are difficult to implement because of the large amount of deductive, declarative knowledge required from experts. This project aims to tackle this problem by combining the precision and accuracy that machine learning achieves in modelling tasks with the explanatory capacity of defeasible argumentation, an emerging paradigm in AI. Defeasible argumentation is a technique for modelling reasoning under uncertainty using the notions of arguments and the contradictions among them, both routinely employed in human reasoning. The proposal is to automatically extract arguments, essentially propositional rules, from “black-box” models learnt with machine learning. In turn, potential contradictions (attacks) among the extracted arguments are identified heuristically. This network of arguments and attacks can be validated by one or more end-users, maintaining the interpretability and understandability of the proposed solution. At the same time, quantitative inferences can be produced from this same network of arguments using transparent algorithms from the defeasible argumentation literature. It is expected that these inferences will have predictive power similar to that of the exploited machine learning models.
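One transparent inference algorithm from the defeasible argumentation literature, the grounded semantics, can be sketched in a few lines. The argument names below are invented for illustration:

```python
# Grounded-semantics sketch: start from unattacked arguments and iteratively
# accept every argument whose attackers have all been defeated.
def grounded_extension(arguments, attacks):
    attackers = {a: {x for x, y in attacks if y == a} for a in arguments}
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a not in accepted and attackers[a] <= defeated:
                accepted.add(a)
                defeated |= {y for x, y in attacks if x == a}
                changed = True
    return accepted

# Toy network of extracted rules: "exception" rebuts "high_risk", which in
# turn attacks "low_risk"; accepting the exception reinstates "low_risk".
args = {"high_risk", "low_risk", "exception"}
atts = {("exception", "high_risk"), ("high_risk", "low_risk")}
extension = grounded_extension(args, atts)
```

In the project, the arguments would be rules extracted from a trained model and the attacks would be the heuristically identified contradictions among them.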

Visualising latent space in autoencoders for Gravitational Waves detection (Dr. Luca Longo)
Gravitational waves are disturbances in the curvature of spacetime, generated by accelerated masses, that propagate as waves outward from their source at the speed of light. Detecting these “rare” events is of extreme importance. This project will focus on the construction of a Deep Learning autoencoder for the automatic identification of gravitational waves. In detail, it will focus on the construction of a visualisation layer of the latent space (learned by the autoencoder) that will help humans understand the salient hidden patterns inherent to gravitational waves.

Explainable Artificial Intelligence: Wrapping machine learning models with argumentative capabilities (Dr. Luca Longo)
Argumentation has advanced as a solid theoretical research discipline for inference under uncertainty within Artificial Intelligence. Scholars have predominantly focused on constructing argument-based models that demonstrate non-monotonic reasoning using the notions of arguments and conflicts. However, they have only marginally examined the degree of explainability this approach can offer when explaining inferences to humans in real-world applications.
This proposal concerns the application of argumentation to wrap machine learning models with argumentative capabilities.

Machine learning for the creation of Computational models of trust (Dr. Luca Longo)
Computational Trust applies the human notion of trust to the digital world, which is seen as malicious rather than cooperative.
Trust factors are promising for assessing the trustworthiness of virtual identities interacting in an open environment. This proposal concerns the application of computational trust to online communities, such as Wikipedia and Stack Overflow, where content is created collaboratively by humans. In detail, the goal is to create a model of trust to evaluate the trustworthiness of information in online communities. This can be done employing machine learning, either unsupervised or supervised.

Machine learning for mental workload modeling (Dr. Luca Longo)
Past research in HCI has generated a number of procedures for assessing the usability of interactive systems. These procedures tend to omit characteristics of the users, aspects of the context and peculiarities of the tasks, and building a cohesive model that incorporates these features is not obvious. A construct frequently invoked in Human Factors is human Mental Workload, whose assessment is fundamental for predicting human performance. This proposal focuses on empirical research investigating which factors mainly compose mental workload and their impact on task performance. A user study has already been carried out with participants executing a set of information-seeking tasks over three popular web-sites (a dataset is ready). The goal is to investigate whether Supervised Machine Learning techniques based on different learning strategies can be successfully employed to build models of mental workload aimed at predicting classes of task performance, and to extract the factors that contribute most to this goal.

Deconvolutional neural networks for visualisation of representations (Dr. Luca Longo)
Deconvolutional networks are convolutional neural networks (CNNs) that work in reverse. They strive to recover features or signals that may previously not have been deemed important to a CNN’s task; a signal may be lost through having been convolved with other signals. The deconvolution of signals can be used in both image synthesis and analysis. This project aims to build a deconvolutional neural network that works with EEG signals, and to visualise the representations learnt by a CNN by back-projecting them to the original input space, allowing a deeper interpretation.

Extracting symbolic rules from Recurrent Neural Networks (Dr. Luca Longo)
Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding models of the underlying RNN, typically in the form of finite state machines, that mimic the network to a satisfactory degree while having the advantage of being more transparent. This project will focus on creating a solution to the above problem with EEG data.

Autoencoders for noise reduction in EEG signals (Dr. Luca Longo) 
An autoencoder is a neural network trained with an unsupervised learning algorithm that uses backpropagation to produce an output value close to its input value. Different types of autoencoder exist: sparse, stacked, variational. Designing an architecture is itself a challenge, as it involves choosing the hidden layers (which can be convolutional, dense or recurrent) as well as the number of neurons and various hyperparameters. This project will focus on this challenge, and in particular on the development of an autoencoder for noise reduction in EEG signals. This will be compared against the well-known Principal Component Analysis (PCA) algorithm.
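The PCA baseline mentioned above can be sketched on synthetic signals. The data below are an illustrative stand-in for EEG epochs, not real recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG epochs: 200 trials of a 10 Hz oscillation with
# random phase, plus independent Gaussian noise on every sample.
t = np.linspace(0, 1, 128)
phases = rng.uniform(0, 2 * np.pi, 200)
clean = np.stack([np.sin(2 * np.pi * 10 * t + p) for p in phases])
noisy = clean + 0.8 * rng.standard_normal(clean.shape)

def pca_denoise(X, k):
    """Keep the top-k principal components, discarding the rest as noise."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean + (U[:, :k] * s[:k]) @ Vt[:k]

# A sinusoid with random phase spans a 2-D subspace, so k=2 suffices here.
denoised = pca_denoise(noisy, k=2)
err_noisy = float(np.mean((noisy - clean) ** 2))
err_denoised = float(np.mean((denoised - clean) ** 2))
```

An autoencoder plays the same role as `pca_denoise`, but its encoder and decoder are non-linear, which is what the project would evaluate against this linear baseline.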

Subjective mental workload and objective indicators of activity with jQuery (Dr. Luca Longo)
The demands of evaluating the usability of interactive systems have produced, over recent decades, various assessment procedures. Often, in the context of web design, when selecting an appropriate procedure it is desirable to take into account the effort and expense required to collect and analyse data.
For this reason the notion of performance is acquiring importance for enhancing web design. However, assessing performance is not a trivial task, and many computational methods and measurement techniques have been proposed. One important construct strictly connected to performance is human Mental Workload (MWL), often referred to as Cognitive Workload. This study aims to assess subjective mental workload over web-based tasks and investigate its correlation with objective indicators of the tangible activity of online users in the browser (mouse movement, scrolling, clicking and focus, captured using jQuery and JavaScript).

Mental workload, learning and tag clouds (Dr. Luca Longo)
Mental workload is probably the most invoked concept in Ergonomics. In a nutshell, it can be intuitively defined as the amount of mental work necessary for a person to complete a task over a given period of time. Several MWL assessment procedures have been proposed in the literature; examples include the subjective instruments NASA Task Load Index and the Workload Profile. Similarly, learning can be quantified in several ways. The aim of this project is to adopt tag-cloud visualization techniques to quantify learning before and after teaching sessions and to relate it to mental workload indexes.

On the relation between cognitive load theory and learning (Dr. Luca Longo)
Cognitive Load Theory (CLT) has been proposed as a means of designing instructional material and delivering it in a way that maximises learning. From this perspective, it is evident that CLT is extremely important in third-level education. An adequate mental workload is a condition in which there is a balance between the intrinsic difficulty of a task or topic, the way it is presented to learners (extraneous load) and the effort made by the learner to integrate the new knowledge with existing knowledge (germane load). However, the experience of cognitive load is not the same for every individual, varying according to cognitive style, education and upbringing. The aim of this project is to use CLT jointly with computational methods for mental workload representation and assessment to quantify the mental load imposed on learners by teaching activities and instructional material. The assumption is that if learners experience an optimal cognitive load during a class, their learning is maximised. It follows that, if their cognitive load can be quantified after any teaching session, this quantity can be used as empirical evidence for selecting the most effective teaching method and/or instructional material in a given context and for a given audience.
The project aims to investigate the relationship between the quantified cognitive load of learners and their quantified learning during teaching sessions.

On the relationship between usability and mental workload of interfaces (Dr. Luca Longo)
The demands of evaluating the usability of interactive systems have produced, over recent decades, various assessment procedures. Often, in the context of web design, when selecting an appropriate procedure it is desirable to take into account the effort and expense required to collect and analyse data. For this reason, web designers have tended to adopt cheap, subjective usability assessment techniques for enhancing their systems. However, there is a tendency to overlook aspects of the context and characteristics of the users during the usability assessment process. For instance, assessing usability in testing environments is different from assessing it in operational environments. Similarly, a skilled person is likely to perceive usability differently from an inexperienced person. For this reason the notion of performance is acquiring importance for enhancing web design. However, assessing performance is not a trivial task, and many computational methods and measurement techniques have been proposed. One important construct strictly connected to performance is human Mental Workload (MWL), often referred to as Cognitive Workload. Several MWL assessment procedures have been proposed in the literature, but a measure that can be applied to web design is lacking. Similarly, recent studies have tried to employ the concept of MWL jointly with the notion of usability. Despite this interest, however, not much has been done to link these two concepts and investigate their relationship.
The aim of this research study is to shed light on the correlation of these two concepts and to design a computational model of mental workload assessment that will be tested with user studies and empirically evaluated in the context of web-design.

Enhancing Decision Making with Argumentation Theory (Dr. Luca Longo)
Argumentation Theory (AT) is an important multi-disciplinary topic in Artificial Intelligence (AI) that incorporates elements of philosophy, psychology and sociology and studies how people reason and express their arguments. It systematically investigates how arguments can be built, sustained or discarded in a defeasible reasoning process, and the validity of the conclusions reached through the resolution of potential inconsistencies. Because of its simplicity and modularity compared to other reasoning approaches, AT has gained importance for enhancing decision-making. This project aims to study the impact of defeasible reasoning and formal models of argumentation theory for supporting and enhancing decision-making. Multiple fields of application will be tested against state-of-the-art approaches: decision-making in health care, multi-agent systems, trust and the Web.

Enhancing the representation of human Mental Workload with Argumentation Theory and defeasible reasoning (Dr. Luca Longo)
Argumentation Theory (AT) is an important multi-disciplinary topic in Artificial Intelligence (AI) that incorporates elements of philosophy, psychology and sociology and studies how people reason and express their arguments. It systematically investigates how arguments can be built, sustained or discarded in a defeasible reasoning process, and the validity of the conclusions reached through the resolution of potential inconsistencies. Because of its simplicity and modularity compared to other reasoning approaches, AT has gained importance for enhancing knowledge representation. This project aims to study the impact of defeasible reasoning and formal models of AT in enhancing the representation of the ill-defined construct of human mental workload (MWL), an important interaction design concept in human-computer interaction (HCI). The argumentation-theory approach will be compared against other knowledge-representation approaches.

Computational Trust and automatic assessment of trust of online information (Dr. Luca Longo)
The scientific research in the area of computational mechanisms for trust and reputation in virtual societies is an emerging discipline within Artificial Intelligence, aimed at increasing the reliability, trust and performance of electronic communities and online information. Computer science has moved, in recent decades, from the paradigm of isolated machines to the paradigm of networks and distributed computing. Similarly, Artificial Intelligence is quickly shifting from the paradigm of isolated, non-situated intelligence to the paradigm of situated, collective and social intelligence. This new paradigm, together with the emergence of information-society technologies, is responsible for the increasing interest in trust and reputation techniques applied to public online information, communities and social networks. This study aims to investigate the nature of trust, the factors that affect the trust of online information, and the design of a computational model for assessing trust. This model will be evaluated empirically with user studies involving several web-sites and groups of people.

Online Community dynamics and behaviour (Dr. Pierpaolo Dondio)
This research investigates the behaviour and dynamics of online communities, including models of reputation and trust. It focuses specifically on the investigation of how financial online communities react to market crashes and the predictive power of such communities. This applied research makes use of a multidisciplinary set of techniques such as data- and text-mining techniques, along with econometric approaches, network analysis, agents, user modelling and trust.

A data analysis approach on multi-language versions of Wikipedia (Dr. Pierpaolo Dondio)
Is Wikipedia (positively or negatively) biased against some cultures?
Wikipedia is available in more than 250 languages. It is reasonable to think that each Wikipedia version represents the point of view of a specific culture and country. Even when a common topic is presented across multiple versions (written in different languages), the importance and emphasis given to the topic are likely to depend on the cultural background of the articles' writers. This variable importance could be negligible for neutral topics, but quite significant for controversial topics or topics strongly representative of particular cultures. For instance, it is likely that an article about Oscar Wilde is better developed in the English version than in the Russian one, while a major Russian writer such as Chekhov or Tolstoy could have more relevance in the Russian Wikipedia, perhaps greater than Shakespeare's. However, Wikipedia guidelines underline that articles should follow a neutral point of view, with every topic presented in a fair, balanced and objective way.
A question therefore arises: are the differences among versions of Wikipedia still compatible with a neutral point of view, or are they a consequence of a bias towards some topics and cultures? The project aims to collect a set of topics common to different Wikipedia versions and quantify the bias each version has towards its own or other cultures. After a group of Wikipedia versions has been selected, a selection of articles will be defined and extracted from a Wikipedia dump. The techniques used will include social network analysis and data analysis; no text analysis is required. The core idea is to design and quantify a level of importance for an article in each Wikipedia version, and then check whether the differences in this level of importance across versions are statistically significant or tolerable.

Using StackOverflow for Teaching and Learning. (Dr. Pierpaolo Dondio)
In the field of Computing, online communities of practice such as Stack Overflow have great potential for educational purposes. Michael Staton has suggested that technology firms now consider Stack Overflow to be “the new Computer Science Department where people go to learn”. A recent survey conducted among Irish lecturers and students revealed that more than 75% of lecturers have already used Stack Overflow, 82% admit to having learned something from it, half of them think it could be used in teaching Computing, 30% think it explains concepts better than a university textbook, and 35% of students think it always or often explains concepts better than their lecturers. However, both students and lecturers complained about the lack of structure and organisation of the Stack Overflow material. If this material is to be used for education (and not only for quick reference), it must be better organised: high-quality content should be filtered, it should be easier to navigate, and the learner should understand and be guided through the prerequisites of each question. Computing is a highly interconnected discipline; for instance, writing a simple PHP script requires an understanding of HTML, DB connections, SQL and bash scripting. If we are to bear the learner in mind, the most crucial thing is to provide her or him with a compass with which to safely navigate the learning material. Moreover, not all the content of Stack Overflow is suitable for learning: some Q&As are quick cut-and-paste code fixes, while other pages are complete, sound discussions and presentations of a Computer Science topic with great educational potential. The aim of the project is to provide tools to leverage and better exploit the learning potential of Stack Overflow. The following are a few ideas the project might explore. First, using text mining and data analytics techniques, the project could separate the material most interesting for teaching purposes from purely technical tips-and-tricks content. Second, using text analytics and visualisation techniques, it could visualise a network of Computer Science concepts emerging from the content of Stack Overflow, showing the links and prerequisites between them. The network could be used by a learner as a compass to understand the links between concepts, to identify which concepts are fundamental, and to navigate among the questions in a meaningful way.
Finally, given a topic (such as “database normalization”, “file reading in C” or “inheritance in Java”), the project could develop a tool that automatically selects a set of Q&As relevant to the topic and with didactic value, to be used by a learner for practice or by a lecturer to design a tutorial or lab session. In this scenario, a question can be posed to the learner with the answer kept hidden, and revealed only when appropriate.

Can Big Data platforms alleviate the Utility Problem in Case-based Reasoning? (Dr. Sarah Jane Delany)
Case-based Reasoning (CBR) is a technology that solves new problems by using experiences of similar past problems. A CBR application requires a case-base of past problems and solutions (cases) to be maintained; a larger case-base gives greater problem coverage and better solution quality, but can also degrade system efficiency and performance, a trade-off known as the utility problem. A variety of solutions have been proposed for the utility problem, including different case-retrieval approaches, better case-base indexing, etc. The objective of this project is to consider whether the advent of big-data platforms such as Hadoop can reduce or even eliminate the utility problem in CBR.
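The retrieval step at the heart of CBR, whose cost is what the utility problem refers to, can be sketched as nearest-neighbour search over a toy case-base. The case features and solutions below are invented for illustration:

```python
import math

# Toy case-base: each case is a (feature-vector, solution) pair; retrieval is
# nearest-neighbour search, whose cost grows linearly with the case-base.
# A big-data platform would shard the case-base, retrieve the best case per
# shard in parallel (map) and keep the global nearest case (reduce).
case_base = [
    ((1.0, 0.2), "solution_a"),
    ((0.9, 0.1), "solution_a"),
    ((-0.8, 0.9), "solution_b"),
]

def retrieve(query, cases=case_base):
    """Return the solution of the case closest to the query problem."""
    return min(cases, key=lambda c: math.dist(query, c[0]))[1]

answer = retrieve((0.95, 0.15))
```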

Investigating Wikipedia edits to identify trending topics (Dr. Sarah Jane Delany)
The Wikimedia Foundation publishes data dumps of Wikipedia on a regular basis (see https://meta.wikimedia.org/wiki/Data_dumps). These dumps provide the text and metadata of the current and all past revisions of all pages as XML files. They can be, and have been, used for a variety of research, including vandalism detection, author reputation, and quality evaluation and measurement.
The idea of this project is to use the edits to identify trending topics and to evaluate these against other real-time streaming data such as Twitter.

Measuring Bias in Contextual Embeddings (Dr. Sarah Jane Delany)
Word embeddings are considered one of the biggest breakthroughs of deep learning for natural language processing (NLP). They are a learned representation of text in which words with the same meaning have similar representations. Recent advances include learning the context of a word (contextual embeddings), which produces a different representation for the word “bank” depending on whether it refers to a river bank or a commercial bank. Recent work on identifying bias (gender, religious, ethnic) in word embeddings can compare the level of bias in static (non-contextual) word embeddings learned from different sources and with different techniques. The aim of this project is to use the WEFE framework to compare the level of bias between static and contextual word embeddings.
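The kind of association score that frameworks such as WEFE compute (in the style of the WEAT test) can be illustrated from scratch. The 2-d “embeddings” below are hand-made for illustration, not real vectors:

```python
import numpy as np

# Hand-made toy embeddings: one axis loosely "male", the other "female".
vec = {
    "engineer": np.array([0.9, 0.1]), "nurse": np.array([0.1, 0.9]),
    "he": np.array([1.0, 0.0]),       "she": np.array([0.0, 1.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attrs_a, attrs_b):
    """Mean cosine similarity to attribute set A minus mean similarity to B."""
    sim_a = np.mean([cosine(vec[word], vec[w]) for w in attrs_a])
    sim_b = np.mean([cosine(vec[word], vec[w]) for w in attrs_b])
    return float(sim_a - sim_b)   # > 0 means closer to set A

bias_engineer = association("engineer", ["he"], ["she"])
bias_nurse = association("nurse", ["he"], ["she"])
```

With real embeddings the project would not compute this by hand but pass target and attribute word sets to WEFE, which implements this family of metrics; for contextual embeddings the vector for each word additionally depends on the sentence it appears in.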

Investigating the Gender Bias in Multi-lingual Word Embeddings (Dr. Sarah Jane Delany)
Recent work on identifying gender bias in natural language processing (NLP) systems has focussed on the level of gender bias in word embeddings. APIs have recently been made available, including the Fair Embedding Engine and Word Embedding Fairness Evaluation (WEFE), to measure the level of gender bias in word embeddings and to debias them. The aim of this project is to use the techniques available in these frameworks to assess the level of gender bias in multi-lingual word embeddings. This would cover both mono-lingual embeddings (embeddings for a single language other than English) and multi-lingual embeddings (embeddings that cover more than one language). One example of these types of embeddings is available here.

Video classification using sound features (Dr. Susan McKeever)
A huge volume of video is uploaded to the web each day, and viewing and classifying it is an ongoing problem. Sound is very informative about the nature and type of a video: sports videos, with their crowd roars, will have their own distinctive pattern. This project will investigate and test the ability to classify video using its soundtrack. It will use machine learning techniques to develop a classifier based on sound features extracted by analysing the digital audio signal.
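One example of the kind of sound feature such a classifier might use is the spectral centroid. The sample rate and the two synthetic signals below are illustrative assumptions; real videos would supply the audio instead:

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 8000  # sample rate in Hz (toy value)

# Two synthetic one-second "soundtracks": a low 80 Hz rumble versus a bright,
# hiss-like crowd noise.
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 80 * t)
hiss = rng.standard_normal(sr)

def spectral_centroid(x):
    """Magnitude-weighted mean frequency: a classic audio feature."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / sr)
    return float((freqs * mag).sum() / mag.sum())

low = spectral_centroid(rumble)    # concentrated near 80 Hz
high = spectral_centroid(hiss)     # spread across the whole spectrum
```

Feature vectors of this kind (centroid, energy, spectral rolloff, etc.), computed per time window, would feed the machine learning classifier.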

Formal Language Parsing (Damian Gordon)
This project is to implement a parser for the Z specification language, using the Haskell programming language. The parser should comply with the official Z standard as far as possible.

Swarm Intelligence (Damian Gordon)
This project is to investigate and develop a model of swarm intelligence. The basic architecture of a swarm is the simulation of collections of concurrently interacting agents; with this architecture, you can implement a large variety of agent-based models. One interesting application is the modelling of crowd behaviour in emergency situations.
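A minimal sketch of such an agent collection, using only a cohesion rule (an assumption of this write-up; full boids-style models add separation and alignment rules as well):

```python
import random

random.seed(42)

# Each agent moves a fraction of the way towards the crowd's centroid at
# every step, so the swarm aggregates without any central controller.
agents = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

def centroid(pts):
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

def step(pts, rate=0.1):
    cx, cy = centroid(pts)
    return [(x + rate * (cx - x), y + rate * (cy - y)) for x, y in pts]

def spread(pts):
    """Sum of squared distances of the agents from their centroid."""
    cx, cy = centroid(pts)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in pts)

initial_spread = spread(agents)
for _ in range(50):
    agents = step(agents)
final_spread = spread(agents)
```

Crowd-behaviour models for emergencies replace the single global centroid with local neighbourhoods, obstacles and exits, but the update loop has the same shape.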

Gait Analysis (Damian Gordon)
This project will focus particularly on kinematics (the measurement of the movement of the body in space), using simple computer vision techniques to identify polynomial splines and create a “stickman” figure that can be overlaid beside the original actor using augmented reality techniques.

Using Puppy Linux to recover files (Damian Gordon)
Puppy Linux is a distribution that provides users with a simple environment and can be used to recover files (as long as they are not on NTFS partitions). This project proposes that you extend the functionality of Puppy Linux to recover files from NTFS.

Detecting Invisible Motion (Damian Gordon)
This project will look at research from MIT, at the intersection of vision and graphics, whose authors created an algorithm that offers a new way of looking at the world. The technique, which can amplify both movement and colour, can be used to monitor everything from the breathing of a sleeping infant to the pulse of a hospital patient. Its creators, led by computer scientist William Freeman, call it “Eulerian Video Magnification”: https://www.youtube.com/watch?v=3rWycBEHn3s

Building a visual knowledge base, for unsupervised image recognition (Dr. Susan McKeever)
Knowledge bases are key to storing and reusing knowledge. Examples are less structured bases such as Wikipedia, or more structured ones such as DBpedia or ConceptNet. This project is about building a visual knowledge base from a labelled image set, using object recognition. A knowledge base of this kind, based on visual images, does not appear to exist yet. To explain the concept, consider a scenario: if a human were to look at a picture of a street, they would be able to name objects in the image such as car, lamppost, house and so on. The associated knowledge is that a street “contains”, or is associated with, cars, houses, etc. The human uses this type of knowledge to recognise any street image. If many images are passed through this process, useful knowledge can be generated. This knowledge, once captured, could then be used to recognise images in an unsupervised way, based on their content: an unlabelled image that contains cars, houses, etc. matches “street”. In turn, this could support the recognition of unseen images, similar to “zero-shot learning”. The concept can then be proved by matching unseen images against the knowledge base to see whether they are recognised.

Profiling from an image (Dr. Susan McKeever)
Automatically extract attributes of a person from an image: age, gender, height (depending on the image), weight, clothing, and so on. The project will use both image processing and machine learning skills.

Creating more generalisable machine learning models through hierarchical labelling (Dr. Susan McKeever)
Predictive models are created using labelled datasets. Once trained, a model can be used to predict the classes associated with the labels. The purpose of this project is to examine how label hierarchies can be used to create models that predict detailed classes but can, if necessary, fall back to a higher-level category when the model cannot, or does not need to, distinguish the detail. For example, a predictive model that can detect objects in an image (rose, daisy, hyacinth, chair, desk, table) can, when presented with a new image of an unknown plant, detect that it is a “flower” even though it does not match any of the detailed classes it was trained on. This approach allows models to use the knowledge they have already gained about groups of labels to classify unknown examples.
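The fallback idea can be sketched as follows. The labels, hierarchy and classifier scores below are hypothetical:

```python
# Hypothetical two-level label hierarchy: fine class -> parent category.
PARENT = {"rose": "flower", "daisy": "flower", "hyacinth": "flower",
          "chair": "furniture", "desk": "furniture", "table": "furniture"}

def predict(scores, threshold=0.5):
    """`scores` maps fine-grained label -> probability (assumed to sum to 1).

    If no fine class is confident enough, back off to the parent category
    carrying the most probability mass.
    """
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return best
    parent_mass = {}
    for label, p in scores.items():
        parent_mass[PARENT[label]] = parent_mass.get(PARENT[label], 0.0) + p
    return max(parent_mass, key=parent_mass.get)

confident = predict({"rose": 0.8, "daisy": 0.1, "chair": 0.1})   # fine class wins
unsure = predict({"rose": 0.3, "daisy": 0.3, "hyacinth": 0.3, "chair": 0.1})
```

In the unsure case no single flower reaches the threshold, but together the flower classes carry 0.9 of the mass, so the model can still usefully answer “flower”.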

Universal Design Code Parser (Damian Gordon)
The Seven Principles of Universal Design, modified by O’Leary and Gordon (2011) are: 1. Equitable Use: The design is useful and marketable to people with diverse abilities. 2. Flexibility in Use: The design accommodates a wide range of individual preferences and abilities. 3. Simple and Intuitive Use: Use of the design is easy to understand, regardless of the user’s experience, knowledge, language skills, or current concentration level. 4. Perceptible Information: The design communicates necessary information effectively to the user, regardless of ambient conditions or the user’s sensory abilities. 5. Tolerance for Error: The design minimises hazards and the adverse consequences of accidental or unintended actions. 6. Use of Design Patterns: To make the code easier to understand, and easier to extend, use pre-exiting patterns. 7. Consider the User: Make the user the centre of the whole process. Understand the range of users of the system.
This project seeks to develop a parser that will scan Python code to determine how closely the code adheres to the above principles. NOTE: this is not about assessing the universal design of the user interface produced by the code; it examines the code itself, to see how well the code follows universal design.

The Psychogeography Toolkit (Damian Gordon)
Psychogeography is defined as “the study of the precise laws and specific effects of the geographical environment, consciously organized or not, on the emotions and behavior of individuals.” One typical approach is to draw a large circle at random on a map and travel along that circle, commenting on what you see, what you hear, and how it feels (e.g. look at the metal drains – are there dates on them? – look at the shapes of the buildings, and how the telephone wires and electricity wires snake around them, etc.). This project will seek to develop a random (circular) route generator, and create a means by which the experiences can be recorded, photographed, etc. and automatically formed into a webpage.
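The route-generation half is geometrically simple. A minimal sketch, assuming a flat-earth (equirectangular) approximation that is adequate for walking-scale circles, and with the radius range chosen arbitrarily:

```python
# Sketch of a random circular route generator. Waypoints are placed on
# a circle of random radius around a start point; 111.32 km is the
# approximate length of one degree of latitude.
import math
import random

def circular_route(lat, lon, n_points=12, max_radius_km=2.0):
    """Return n_points (lat, lon) waypoints on a random circle."""
    radius_km = random.uniform(0.5, max_radius_km)
    waypoints = []
    for i in range(n_points):
        angle = 2 * math.pi * i / n_points
        d_lat = (radius_km / 111.32) * math.sin(angle)
        # Longitude degrees shrink with latitude, hence the cosine.
        d_lon = (radius_km / (111.32 * math.cos(math.radians(lat)))) * math.cos(angle)
        waypoints.append((lat + d_lat, lon + d_lon))
    return waypoints

# A walkable loop around central Dublin.
for point in circular_route(53.3498, -6.2603, n_points=6):
    print(point)
```

Feeding the waypoints to a routing API would snap the ideal circle onto actual streets, which is where the psychogeographic interest lies.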

Shakespeare Apocrypha (Damian Gordon)
The Shakespeare Apocrypha is the name given to a group of plays (e.g. Sir Thomas More, Cardenio, and The Birth of Merlin) that have sometimes been attributed to William Shakespeare, but whose attribution is questionable for various reasons. Using stylistic, statistics-based metrics – e.g. Zipf analysis, sentence length, sentence structure, words used, tense, infrequent n-gram occurrences, active vs. passive voice – and other suitable metrics developed as part of the project, similarities will be measured between the canonical plays and the apocryphal ones.
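Two of the simplest metrics mentioned above can be sketched in a few lines; real attribution work would add the n-gram and Zipf-based measures, but this shows the shape of a stylometric feature extractor (the tokenisation rules here are deliberately naive assumptions):

```python
# Sketch of two basic stylometric features: average sentence length
# and type-token ratio (vocabulary richness).
import re

def stylometric_profile(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

sample = "To be, or not to be. That is the question."
print(stylometric_profile(sample))
```

Comparing such feature vectors between canonical and apocryphal texts (e.g. by cosine distance) gives a crude but measurable notion of stylistic similarity.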

Building Learning Objects (Damian Gordon)
This project is to investigate and develop a series of Learning Objects (units of educational content delivered via the internet), using either the IMS Content Packaging or the SCORM (Sharable Content Object Reference Model) standard. As well as having the standard learning object parameters, the learning objects for this project will be aware of how learning style can affect the means of presentation.

NewSpeak Text Filter (and Translator) (Damian Gordon)
One of the themes of George Orwell’s 1984 is that the government is simplifying the English language (both vocabulary and grammar) to remove any words or possible constructs which describe the ideas of freedom and rebellion. This new language is called Newspeak and is described as “the only language in the world whose vocabulary gets smaller every year”. In an appendix to the novel, Orwell included an essay explaining the basic principles of the language. The objective of this project is to develop a text filter that will take in normal text and convert it into Newspeak. An initial system will simply change the words in the text to their equivalent in Newspeak, e.g. “bad”, “poor” and “lame” all become “ungood”; “child”, “children”, “boy” and “girl” become “young citizens”; “quite”, “rather”, “kind of” and “kinda” become “plus”. The next stage is to investigate the more fundamental translation process, whereby the grammar and structure of the text are changed to the style outlined by Orwell.
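The first-stage word substitution can be sketched with a regular expression built from the mappings given above (the singular forms are mapped to “young citizen” here, a small number-agreement adjustment of my own):

```python
# Minimal sketch of first-stage Newspeak substitution: whole-word,
# case-insensitive dictionary replacement, longest phrases matched first.
import re

NEWSPEAK = {
    "bad": "ungood", "poor": "ungood", "lame": "ungood",
    "child": "young citizen", "children": "young citizens",
    "boy": "young citizen", "girl": "young citizen",
    "quite": "plus", "rather": "plus", "kind of": "plus", "kinda": "plus",
}

# Sort keys longest-first so the phrase "kind of" wins over "kind".
pattern = re.compile(
    r"\b(" + "|".join(sorted(map(re.escape, NEWSPEAK), key=len, reverse=True)) + r")\b",
    re.IGNORECASE,
)

def to_newspeak(text: str) -> str:
    return pattern.sub(lambda m: NEWSPEAK[m.group(0).lower()], text)

print(to_newspeak("The children were rather bad."))
# The young citizens were plus ungood.
```

The second stage – transforming grammar and structure – would need part-of-speech tagging and rewrite rules, which is where the real research lies.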

Knowledge Graphs for Machine Learning (Bojan Bozic)
State-of-the-art analysis and report on current knowledge graph implementations and frameworks which could be considered as a preprocessing step to benefit data preparation for machine learning tasks. The project would include the implementation of a use case and validation with an open-source dataset. The goal is to show that using a knowledge graph improves prediction accuracy and provides better benchmarks compared to the baseline.

Ontology-driven Fake News Detection (Bojan Bozic)
The idea is to use a knowledge base and model the characteristics of fake news in order to detect them in free text. The approach includes building a knowledge base with one of the existing implementations (e.g. Virtuoso, AllegroGraph, etc.) or building a simpler model from scratch in Prolog. The knowledge base can be used to validate statements and detect inconsistencies, which can be classified as fake news. A dataset will be used to validate the implementation.

Neural Networks and Logic Rules for Semantic Compositionality (Bojan Bozic)
How can we combine Neural Networks with logic rule-based systems built on Description Logic, and what benefits could be gained by extending NNs with DL? This could be a research report on current attempts to define rules in NNs in order to reduce complexity and improve predictions, or a proof-of-concept implementation showing how to use logic rule restrictions or a semantic rule-based language such as SHACL or ShEx in simple NNs.

Decentralised multi-agent deep reinforcement learning (DMARL) (Basel Magableh)
Context uncertainty in distributed self-adaptive systems requires multiple decentralised adaptation agents that can adapt to changes in distributed systems. Learning in a decentralised multi-agent environment is fundamentally more challenging than with a single agent. DMARL faces serious problems such as non-stationary states, high dimensionality of the observation space, multi-agent credit assignment, robustness, and scalability. This project investigates the possibility of employing DMARL in a self-adaptive microservices architecture. The project likely requires a good understanding of Ray, a scalable distributed reinforcement learning framework (https://ray.readthedocs.io/en/latest/rllib.html), or Intel Coach (https://github.com/NervanaSystems/coach).

Unsupervised realtime anomaly detection towards self-healing microservices architecture (Basel Magableh)
Because of the uniqueness of streaming data found in distributed systems, the design of a self-healing microservices architecture should meet the following requirements:
(1) The system should be able to operate over real-time data (no look-ahead). No data engineering is possible, as the data is collected in real time.
(2) The algorithm must continuously monitor and learn about the behaviour of the microservices cluster.
(3) The algorithm must be implemented with an automatic unsupervised learning technique, so it can continuously learn new behaviour and anomalies in real time.
(4) The algorithm must be able to adapt to changes in the operating environment and provide an adaptation strategy that can be orchestrated over the cluster nodes.
(5) The algorithm should be able to detect anomalies as early as possible, before the anomalous behaviour interrupts the functionality of the running services in the cluster.
(6) The proposed model should minimise both the false-positive (false-alarm) rate and the false-negative rate. If the system identifies normal behaviour as an attack, this attempt should be classified as a false positive (false alarm).
(7) The proposed model should offer a high detection rate, better accuracy, and a lower false-alarm rate.
(8) The proposed model should offer a consistent adaptation strategy, preserve the cluster state, and offer the architecture a rollback (auto-recovery) strategy in case the adaptation action fails.
(9) One important aspect of a self-healing microservices architecture is the ability to i) continuously monitor the operational environment, ii) detect and observe anomalous behaviour, and iii) provide a reasonable policy for self-scaling, self-healing, and self-tuning the computational resources to adapt to sudden changes in its operational environment dynamically at run-time.
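Requirements (1)–(3) can be illustrated with a minimal sketch of an online, unsupervised detector. The metric, the 3-sigma threshold, and the warm-up length are illustrative choices, not part of the project brief:

```python
# Online anomaly detection over a stream: keep running mean/variance
# via Welford's algorithm and flag points far from the mean. No
# look-ahead, no labels, constant memory.
import math

class OnlineAnomalyDetector:
    def __init__(self, threshold=3.0, warmup=30):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.threshold = threshold
        self.warmup = warmup

    def update(self, x: float) -> bool:
        """Ingest one observation; return True if it looks anomalous."""
        anomalous = False
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        # Welford's online update of mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = OnlineAnomalyDetector()
stream = [100.0, 101.0, 99.0] * 20 + [500.0]  # latency spike at the end
flags = [detector.update(x) for x in stream]
print(flags[-1])  # True
```

A production detector would replace the Gaussian assumption with something robust to the bursty, non-stationary traffic the requirements describe, but the streaming shape of the solution stays the same.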

Deep Spiking Neural Networks (SNN) (Basel Magableh)
Spiking Neural Networks (SNNs) are a rapidly emerging area of data analytics research. SNNs are inspired by the brain’s process of sequential memory. SNNs might be able to handle complex temporal or spatial data in dynamic environments, at low power and with high effectiveness and noise tolerance. The success of deep learning comes at the cost of brute-force algorithms and power-hungry GPUs, in addition to the issues of slow model training and the limitation of each model to a specific domain of MDP environments. SNNs could benefit from the advances made in evolution and cognitive neuroscience and be employed in the domain of IoT and multi-sensor networks. This project aims to investigate the possibility of implementing SNNs on a simulated IoT platform such as CupCarbon (http://www.cupcarbon.com).

Activity-awareness in mobile computing (Basel Magableh)
Activity-awareness in mobile computing has inspired novel approaches for creating smart personalised services and functionalities in mobile devices. The ultimate goal of such approaches is to enrich the user’s experience and enhance software functionality. One of the major challenges in integrating mobile operating systems with activity-aware computing is the difficulty of capturing and analysing user-generated content from personal handsets/mobile devices without compromising users’ privacy, while securing the collected sensitive data. Although conventional solutions exist for collecting and extracting textual content generated by users in mobile computing applications, these solutions are unsatisfactory when it comes to the personal integrity of the user. All previously known conventional solutions comprise collecting the user’s generated content from various applications, such as an email client and/or Short Message Service (SMS). Unfortunately, all of those applications are introduced to the user after exposing and sharing his/her personal data with web services located outside the mobile device, e.g. in the cloud. In addition, the collected information is stored outside the user’s personal mobile device on some remote server.
These serious drawbacks make many users reluctant to use the described conventional solutions. However, there is still demand for personalised, pro-active service functionalities. Activity-aware computing enables mobile software to respond proactively and effectively to user needs based on the contextual information found in the environment where it operates. The ultimate goal of activity-aware computing is to automatically extend the application behaviour/structure based on the activity being performed by the user or software components. In this project, we investigate a model for collecting user-generated content from the mobile OS messaging loop, feeding the collected context information into an experience matrix based on the sparse distributed model. The model offers the device a runtime representation of the current context model, which can be used to predict the user’s activity.

Toolkit to Support Undergraduate Co-Design Team Projects (John Gilligan)
Co-design has its roots in the Participatory Design techniques developed in Scandinavia in the 1970s. Co-design reflects a fundamental change in the traditional designer-client relationship. A key tenet of co-design is that users, as ‘experts’ of their own experience, become central to the design process. Co-design is a multifaceted process with multiple stakeholders. It requires support for training participants in co-design methods, support for project management across the project lifecycle, and support for managing the deliverables of these projects, for example code sharing. This project addresses the development of a co-design toolkit to provide these supports across the processes of co-design. What are the useful components of this toolkit? Can their effectiveness in assisting the development process be measured? Do they help meet the learning outcomes of team projects on undergraduate Computer Science courses?

Designing an effective formative assessment program for teaching Agile software development (John Gilligan)
Teaching Agile software development faces many challenges, especially in the development of appropriate exercises and activities around different aspects of the methodology. For example, what is the best way to introduce the different roles involved, such as Scrum Master, Scrum Coach and Product Developer? How can user feedback be embedded in the process? This project looks at the design of a suite of supporting exercises, built around the iterations of a specific app development, to realise the learning outcomes of a course on Agile development.

Developing an effective Audit Methodology for testing Web Accessibility (John Gilligan)
The EU Web Accessibility Directive of 2016 requires public bodies of member states to ensure their websites and apps are accessible to persons with disabilities. All websites created after that date have to be accessible by 23 September 2019; existing websites have to comply by 23 September 2020; and all mobile applications have to be accessible by 23 June 2021. The W3C Web Content Accessibility Guidelines have also recently been upgraded to version 2.1, with new checkpoints related to cognitive challenges, amongst others. This project looks at developing effective auditing strategies to ensure compliance with these guidelines, using both automatic tools and manual processes.

Establishing the inclusiveness and fairness of Big Data Sets for machine learning and other applications (John Gilligan)
The issue of applications based on Big Data excluding those on the edges of the data has become a rising topic. For example, CV-screening programs which use machine learning can discriminate against those with disabilities, or on grounds of gender, because the data used doesn’t sufficiently represent those populations. August bodies such as the World Economic Forum have highlighted this. IBM’s AI Fairness 360 toolkit is an example of an initiative which looks at this problem. This project examines ways in which populations are excluded, for example through outlier removal in ML pre-processing. It looks to develop metrics for inclusion, similar to those which have been established for data fairness, and to develop ways to calculate these measures. It also looks to develop algorithms for greater inclusion in these applications.

Augmented Reality Tourist Information System (John Gilligan)
Is Augmented Reality a viable technology for building usable tourist information systems? Can it be used effectively to combine geolocation and multi-modal content presentation to enhance the tourist experience?

Projects by Brendan Tierney can be found here:
http://b-tierney.com/projects/

Projects by Damian Gordon can be found here: http://damiansprojectideas.blogspot.com/search/label/MSc