
PhD contract in the field of Computer Science, funded for 3 years by the Université Clermont Auvergne

This job offer has expired


  • ORGANISATION/COMPANY
    Université Clermont Auvergne
  • RESEARCH FIELD
    Computer science
  • RESEARCHER PROFILE
    First Stage Researcher (R1)
  • APPLICATION DEADLINE
    28/06/2020 00:00 - Europe/Brussels
  • LOCATION
    France › AUBIERE
  • TYPE OF CONTRACT
    Temporary
  • JOB STATUS
    Full-time
  • HOURS PER WEEK
    35 H
  • OFFER STARTING DATE
    01/10/2020
  • REFERENCE NUMBER
    UCA/ANR/011
  • IS THE JOB RELATED TO STAFF POSITION WITHIN A RESEARCH INFRASTRUCTURE?
    Yes

Subject: RnX – Deep Neural Networks and Explainability

Supervisor: MEPHU NGUIFO Engelbert

Laboratory: LIMOS

Email and phone: engelbert.mephu_nguifo@uca.fr, 0473407629

Co-advisor(s): FALIH Issam (UCA), TSOPZE Norbert (U.Yaoundé1)

Abstract (up to 10 lines):

The RnX project addresses explainability in deep neural networks. Deep neural networks (DNNs) have made significant progress in predictive capacity in recent years, yet these models are still treated as "black boxes": the explanatory capability that is mandatory in critical domains (military, health, nuclear, etc.) has not progressed at the same pace. Making a model explainable means associating with it a component that describes how the model operates in terms the user can understand.

The explainability of deep neural networks is an ongoing research question. Most existing work concerns convolutional neural networks (CNNs) and is largely oriented towards visualization, projecting the filters learned by the convolutions onto the input image. This localizes the region of the image that contributed most to the predicted class, and thereby helps explain the network's output. Despite these efforts, the explainability of deep neural models remains an open problem.

The RnX project proposes to build on the state of the art and to develop an explainability method for deep neural networks, based on a hybrid approach coupling a reverse-engineering technique with a technique that uses domain knowledge to build the deep neural network.

Skills:

- Qualification: Master's degree or equivalent in computer science.

- Solid knowledge of machine learning and applied mathematics. Knowledge of deep neural networks is an asset but not strictly mandatory.

- Expected profile: curiosity and openness; ability to interact with other researchers; autonomy; a taste for experimentation; innovativeness.

- Good command of English (written and spoken) is mandatory. Knowledge of French is not strictly required.

Keywords:

Machine Learning, Neural Networks, Deep Learning, Explainability

Description (up to 1 page):

The RnX project addresses explainability in deep neural networks. Deep neural networks (DNNs) have made significant progress in predictive capacity in recent years, yet these models are still treated as "black boxes": the explanatory capability that matters in critical domains (health, military, nuclear, etc.) has not progressed at the same pace. Making a model explainable means associating with it a component that describes how the model operates in terms the user can understand. The explainability of deep neural networks is a topical research question. Most existing work concerns convolutional neural networks (CNNs) and is largely oriented towards visualization, projecting the filters learned by the convolutions onto the input image; this localizes the region of the image that contributed most to the predicted class, and thereby helps explain the network's output (a minimal saliency sketch illustrating this idea is given after the list below). Despite these efforts, the explainability of deep neural models remains an open problem, because many aspects are rarely discussed in the literature:

- Other types of deep neural networks: for example recurrent neural networks (RNNs), deep belief networks (DBNs) or auto-encoders. Unlike CNNs, which mainly operate on image matrices, these variants take as input vectors of data that are not always suited to convolution.

- Units other than convolutions: RNNs, for example, use other kinds of units, generally combining a linear combination of the inputs with a non-linearity, for which it is not as easy to project the learned features back onto the input as it is with convolutional units.

- Other data formats: spatio-temporal data, sequences and graphs are rarely addressed in work on explainability. These types of data are generally processed by models other than CNNs and do not lend themselves easily to visualization.

- Knowledge transfer: can the knowledge stored in the network after learning be extracted and used to improve human understanding of the phenomenon being modelled, especially in situations where the machine has proven superior to a person?

- Application of algorithms designed for conventional neural networks: many algorithms have been proposed for the explainability of shallow neural networks; they produce a set of rules describing how the network operates. Adapting these algorithms to deep neural networks may help explain models that process data other than images and that are built from units other than convolutions.

- Explainability assessment: the concept of explainability is still vaguely defined, and evaluation measures are rare, so it is difficult to compare the results of two explainability algorithms on the same data.
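To make the visualization-by-projection idea discussed above concrete, here is a minimal sketch of a vanilla gradient saliency map, assuming PyTorch is available. The ToyCNN class and the random input are hypothetical placeholders; this only illustrates the kind of visual explanation described above and is not the method the project will develop.

    # Vanilla gradient saliency for a toy CNN (illustration only; ToyCNN is a hypothetical stand-in).
    import torch
    import torch.nn as nn

    class ToyCNN(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4))
            self.classifier = nn.Linear(8 * 4 * 4, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = ToyCNN().eval()
    image = torch.rand(1, 3, 32, 32, requires_grad=True)  # dummy input image

    scores = model(image)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()  # gradient of the winning class score w.r.t. the input pixels

    # Pixels with large gradient magnitude are those that most influence the predicted class.
    saliency = image.grad.detach().abs().max(dim=1).values  # shape (1, 32, 32)

Gradient magnitudes of this kind are what projection-based CNN explanations typically render as a heat map over the input image.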

The RnX project proposes to study one of these current limitations, and to develop an explainability method for deep neural networks based on a hybrid approach coupling a reverse-engineering technique with a technique that uses domain knowledge to build the deep neural network.
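As a simplified illustration of the reverse-engineering side of such a hybrid approach, the sketch below queries a trained network and fits an interpretable surrogate on its answers, in the spirit of the sampling-and-queries technique of Craven and Shavlik cited below, and reports a simple fidelity score of the kind the evaluation discussion above calls for. It assumes scikit-learn is available; the synthetic dataset and the small MLPClassifier merely stand in for real data and a trained deep network, and none of this is the project's actual method.

    # Query-based surrogate extraction plus a simple fidelity measure (illustration only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "black box" to be explained (a stand-in for a trained deep network).
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                        random_state=0).fit(X_train, y_train)

    # Fit an interpretable surrogate on the network's own answers.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, net.predict(X_train))

    # Fidelity: how often the surrogate agrees with the network on unseen inputs.
    fidelity = np.mean(surrogate.predict(X_test) == net.predict(X_test))
    print(f"fidelity = {fidelity:.3f}")
    print(export_text(surrogate))  # the extracted, human-readable rules

The open question the thesis would address is how to go beyond such post-hoc surrogates, in particular by also using domain knowledge in the construction of the deep network itself, as outlined above.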

References (up to ½ page):

M. W. Craven and J. W. Shavlik: Using sampling and queries to extract rules from trained neural networks. In: Machine Learning: Proceedings of the Eleventh International Conference, San Francisco, CA, USA (1994).

A. Jacovi, O. Sar Shalom and Y. Goldberg: Understanding convolutional neural networks for text classification. In: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Association for Computational Linguistics (2018), pp. 56-65.

R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti and D. Pedreschi: A survey of methods for explaining black box models. ACM Computing Surveys 51 (2018), 93:1-93:42.

W. J. Murdoch et al.: Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences 116(44) (2019), 22071-22080.

L. Tiogning Kueti, N. Tsopzé, C. Mbiethieu, E. Mephu Nguifo and L. P. Fotso: Using Boolean factors for the construction of an artificial neural network. Int. J. General Systems 47(8) (2018), 849-868.

G. Towell and J. Shavlik: The extraction of refined rules from knowledge-based neural networks. Machine Learning 13(1) (1993), 71-101.

N. Tsopzé, E. Mephu Nguifo and G. Tindo: Towards a generalization of decompositional approach of rule extraction from multilayer artificial neural network. In: IJCNN (2011), pp. 1562-1569.

How to apply?

Send the following by email to Engelbert Mephu Nguifo (engelbert.mephu_nguifo@uca.fr):

- a CV,
- a motivation letter,
- your grade transcripts from your Bachelor's and Master's studies.

Reference or recommendation letters are also welcome.





Offer Requirements

  • REQUIRED EDUCATION LEVEL
Other: Master's degree or equivalent



Work location(s)
1 position(s) available at
Laboratory of Informatics, Modeling and Optimization of Systems (LIMOS)
France
Région Auvergne Rhône-Alpes
AUBIERE
63178
Campus Universitaire des Cézeaux, 1 rue de la Chebarde, TSA 60125 - CS 60026

Open, Transparent and Merit-based Recruitment procedures of Researchers (OTM-R)

Know more about it at Université Clermont Auvergne

Know more about OTM-R

EURAXESS offer ID: 528438

Disclaimer:

The responsibility for the jobs published on this website, including the job description, lies entirely with the publishing institutions. The application is handled uniquely by the employer, who is also fully responsible for the recruitment and selection processes.

 
