On Monday 16 April and Tuesday 17 April, the Databases (basi di dati) lectures will be given by Professor Paolo Tiberio.

Lectures will run from 9:00 to 11:00 on both days.

Wednesday, 21 March 2018 14:05

Apple CloudKit is hiring


Are you passionate about large-scale distributed and database systems? Would you like to work in a startup-like environment on cutting-edge systems technology and have your work impact hundreds of millions of users around the globe?
Join us in designing and building Apple's next-generation storage, infrastructure, and cloud services!

The CloudKit team (part of Apple's iCloud) is looking for researchers and software engineers with the following qualifications:

- MSc or PhD in Computer Science or related field
- Experience in building systems and systems software development
- Expertise in one or more of the following areas: query optimization, database systems, distributed systems, machine learning
- Strong publication record is a plus

Positions are available in Cupertino, Seattle, San Francisco, and Boston.

More about CloudKit:
- VLDB'18 paper: http://www.vldb.org/pvldb/vol11/p540-shraer.pdf
- Videos, tutorials, and SDK: https://developer.apple.com/icloud/

To apply, please email your CV to: [email address hidden] or [email address hidden]

On Monday 26/03/2018, instead of the laboratory session, there will be an exercise session in preparation for the first midterm exam.

Students with surnames A-L will attend in the LINFA laboratory, and those with surnames L-Z in the INFOMEC laboratory.

Monday, 12 February 2018 17:29

Prof. Bergamaschi's office hours moved up


Please note that Professor Bergamaschi's office hours originally scheduled for Thursday 15/02/2018 have been moved up to Wednesday 14/02/2018 at 16:00.

The CEI Prize – Best Graduation Thesis will award three graduation theses with public and official recognition and a cash contribution of 2,000.00 euros for each winner.

For more information, visit https://www.ceinorme.it/it/news-main-it/comunicati-stampa/1050-premio-cei-miglior-tesi-di-laurea-ventiduesima-edizione-2017.html

PhD student opening at the Database Group, Department of Computer Science, University of Hong Kong (HKU)

The database group (Dr. Reynold Cheng) (http://www.cs.hku.hk/~ckcheng) is recruiting a PhD student in the area of databases and data mining. The position is funded by HKU to develop interdisciplinary research on using big data technologies to support intelligent transportation and smart city development.


Requirements:

* Four- or five-year undergraduate degree with honors
* Strong background in database systems, data mining, and machine learning; knowledge in transportation and spatial data management a plus but not necessary
* Good written and spoken English
* Strong programming skills
* Strongly motivated in research, with a desire to experiment and explore

HKU offers top research computing facilities and a competitive scholarship of at least 190K HKD per year. If you are interested in this position and believe that you qualify, please send your CV and a cover letter explaining how your skills and research interests fit into our area:

[email address hidden]

For general information about the PhD position, please refer to:
http://www.cs.hku.hk/programme/mphil-phd/

Please bear in mind that the PhD student position has to be filled before November 10, 2017, and the PhD will start on January 1, 2018. Please send your CV to me on or before Aug 31, 2017.

On Tuesday 6 June, starting at 20:30, the ConoscereLinux association will host an evening (in English) with Bartosz Milewski, an international expert on software development, functional programming, and category theory.

For more information, visit the event page at http://conoscerelinux.org/2017/05/18/the-future-of-programming-con-bartosz-milewski/

Wednesday, 22 March 2017 15:43

Youth in Action for Sustainable Development Goals competition

The competition aims to collect and reward the best ideas that can help Italy achieve the Sustainable Development Goals (SDGs) set out in the 2030 Agenda approved by the United Nations.
It is therefore an opportunity for all university students and recent graduates under 30 to put themselves to the test and let their ideas speak for themselves: the proposers of the winning project ideas will be offered a paid internship by the competition's promoters and partners.
 
Since this is a national competition, and we hope for the widest possible participation, we would kindly ask you to bring it to the attention of your students.
 
To register on the platform, sign up for the competition, and find all the necessary information, just click this link. To watch the competition's promo video, the link is this one.
 
 
Fondazione Italiana Accenture
Title:  An architecture for managing spatio-temporal Big Data at scale (application to IoT)


Specialty: System and Software 
Keywords: Big Data, Storage, Spatial, Temporal, Hadoop, Drill, Index, Query

Advisors
Prof. Christine Collet, Grenoble INP
Dr. Houssem Chihoub, INPG Entreprise SA

Grenoble Informatics Laboratory (LIG) is one of the largest computer science laboratories in France. It is structured as a Joint Research Center (French Unité Mixte de Recherche - UMR) founded by the following institutions: CNRS, Grenoble INP, Inria, and Grenoble Alpes University. The work for this position will be carried out at the LIG laboratory. It is proposed and funded in the context of the ENEDIS industrial chair of excellence on smart grids, in partnership with Grenoble INP.

Description

Nowadays, data come in huge masses from various sources such as web data, IoT data, scientific data, etc. This data deluge, more commonly known as Big Data, has introduced unprecedented performance and scalability challenges for data management and processing systems. In many cases, data are characterized by two very important dimensions: (geographical) space and time. As a result, many analytics applications and algorithms rely on spatial and/or temporal data queries and computations. Over the years, spatio-temporal data querying has been studied extensively in the literature. However, most of the introduced techniques fail to scale to Big Data volumes, or to integrate efficiently with large-scale data processing infrastructures (such as Hadoop).

This project is research-oriented. Its goal is to design and build a scalable architecture to store, manage, and process spatio-temporal data.
It will also contribute to providing a high-performance and scalable query engine for spatio-temporal data. The architecture will rely on the Hadoop file system (HDFS) to store data. Additionally, a hybrid solution will be introduced that combines a scalable, distributed query engine with an efficient data indexing and partitioning approach that leverages spatio-temporal data characteristics. The latter will aim at providing fast data access and fast computation while minimizing data transfer and enhancing data locality. The proposed architecture will be further validated and evaluated on the case of IoT data: a use case consisting of querying collected and historical data from sensors deployed nation-wide on the power grid in France will thereby be used for this purpose.
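
To make the kind of spatio-temporal partitioning mentioned above more concrete, here is a minimal Python sketch, not taken from the posting: the geohash-plus-time-bucket scheme and all names in it are illustrative assumptions, one of several possible designs. It derives a composite key from a spatial cell and a coarse time bucket, so that records close in space and time land in the same HDFS partition and a query engine such as Spark or Drill can prune partitions early.

from datetime import datetime, timezone

GEOHASH_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat: float, lon: float, precision: int = 6) -> str:
    # Standard geohash: interleave longitude/latitude bits, emit base32 characters.
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    result, even, bit_count, ch = [], True, 0, 0
    while len(result) < precision:
        if even:                      # longitude bit
            mid = sum(lon_range) / 2
            if lon >= mid:
                ch = (ch << 1) | 1
                lon_range[0] = mid
            else:
                ch <<= 1
                lon_range[1] = mid
        else:                         # latitude bit
            mid = sum(lat_range) / 2
            if lat >= mid:
                ch = (ch << 1) | 1
                lat_range[0] = mid
            else:
                ch <<= 1
                lat_range[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:            # every 5 bits form one base32 character
            result.append(GEOHASH_BASE32[ch])
            bit_count, ch = 0, 0
    return "".join(result)

def partition_key(lat: float, lon: float, ts: datetime,
                  hours_per_bucket: int = 6) -> str:
    # Hypothetical composite key: spatial cell + coarse time bucket,
    # usable as an HDFS partition directory name.
    bucket_hour = (ts.hour // hours_per_bucket) * hours_per_bucket
    return f"{geohash(lat, lon)}/{ts:%Y-%m-%d}-{bucket_hour:02d}"

# Example: a sensor reading taken near Grenoble at 14:30 UTC.
print(partition_key(45.19, 5.72, datetime(2018, 3, 21, 14, 30, tzinfo=timezone.utc)))

A spatio-temporal range query then only needs to open the partitions whose geohash prefixes and time buckets intersect the query window, which is the data-locality effect the proposed architecture aims for.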

The project will consist of the following steps:
- Study of the state of the art (bibliographical references, latest technologies): spatio-temporal indexing, Big Data management systems, distributed data querying, etc.
- Getting familiar with the relevant systems and frameworks: Hadoop, Spark, Apache Drill, MongoDB
- General design specification
- Implementation 
- Experimental validation and evaluation (IoT data)
- Documentation 

Used Technologies 
Hadoop and HDFS, Spark, Apache Drill, MongoDB.
 
Required Skills
• MSc / Master's / Bac+5 in computer science or engineering, or equivalent
• Knowledge about Big Data processing and management / distributed systems: Hadoop, Spark, NoSQL (MongoDB, HBase, Cassandra), distributed file systems, Apache Drill, and SQL
• Strong programming skills in one or more languages (Java, C, C++, Python, Scala …)
• Strong English skills are a plus

Salary 
1500 € gross / month

Duration
6 months

Contact 
Please send CV and cover letter to Christine Collet ([email address hidden]) and Houssem Chihoub ([email address hidden]).