Current research is focused on topics around our fleet management system ConnectFleet, mobility solutions, and autonomous driving. ConnectFleet offers fleet managers various services that make their daily business easier.
The main research fields are MaaS/TaaS (mobility and transportation as a service), building highly distributed, virtual high-performance teams, and e-mobility. In addition to this, we are building a data science platform for processing terabytes of car data. You can find my Google Scholar profile here.
My past research during my time in Berlin focused on methods for A/B test context validation in dynamic environments for web applications.
Data-driven, continuous development and deployment of modern web applications depend critically on registering changes as fast as possible, paving the way for short innovation cycles. A/B testing is a good candidate for comparing the performance of different versions. Despite the widespread use of A/B tests, there is little research on how to assert the validity of such tests. Even small changes in the application's user base or in the hardware or software stack that are unrelated to the variants under test can propagate along possibly hidden paths into significant disturbances of the overall performance of an A/B test and, hence, invalidate it. The highly dynamic server and client run-time environments of modern web applications therefore make it difficult to correctly assert the validity of an A/B test.
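One lightweight validity check in this spirit is a sample-ratio-mismatch (SRM) test: if the traffic split between the variants deviates significantly from the configured ratio, something outside the test (a caching layer, a bot filter, a redirect) is likely interfering, and the results should not be trusted. This is a toy illustration of the general idea, not the method from the paper; the function name is mine.

```python
from statistics import NormalDist

def srm_p_value(n_a, n_b, expected_ratio=0.5):
    """Two-sided z-test for sample ratio mismatch (SRM).

    If users were really split expected_ratio : (1 - expected_ratio)
    between variants A and B, how surprising are the observed counts?
    A tiny p-value signals a broken assignment and thus an invalid test.
    """
    n = n_a + n_b
    expected_a = n * expected_ratio
    std = (n * expected_ratio * (1 - expected_ratio)) ** 0.5
    z = (n_a - expected_a) / std
    return 2 * NormalDist().cdf(-abs(z))

# A perfectly balanced 50/50 split is unsuspicious ...
print(srm_p_value(5000, 5000))   # p = 1.0
# ... while a 53/47 split over 10,000 users is a strong red flag.
print(srm_p_value(5300, 4700))   # p << 0.001
```

In practice such a check would run continuously alongside the test, since the disturbances described above can appear at any point during the experiment.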
Michael Nolting and Jan Eike von Seggern. 2016. Context-based A/B Test Validation. In Proceedings of the 25th International Conference Companion on World Wide Web (WWW '16 Companion). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 277-278. DOI: https://doi.org/10.1145/2872518.2889306
During my time at Volkswagen research, I filed the following patent: Vehicle Internet API
In a sentence: an API for creating car-centric apps, making it possible to build an app store for car manufacturers.
I did my Ph.D. at the University of Hanover under the supervision of Prof. Jörg Hähner and Prof. Monika Sester. I started in August 2007 and submitted my thesis in July 2010. It took about three years of work to complete the Ph.D., which is not too bad for a 100% research program (only a minimum of lecturing was required).
People often ask me how to become a machine learning engineer. The Ph.D. was my way of doing that. By starting a Ph.D. in systems engineering, I knew that I would have the opportunity to work on interesting problems. The Ph.D. opened a door for me to a fascinating job market, thanks to the boom in engineering complex data-driven systems, which wasn't something I expected. All I had hoped for was to be qualified to work on more interesting things than a plain software engineer.
Broadly speaking, the topics of my Ph.D. were in the area of real-time configuration of active cameras. Although this seems to have little in common with big data analytics, active cameras implement the map/reduce paradigm: they pre-process vast amounts of image data locally and send only abstract information. Furthermore, they have to jointly analyze (a distributed reduce step) which targets to observe and predict their optimal positions, which is very similar to methods used in predictive analytics for real-time processing. Based on the algorithms I developed in my Ph.D., one could build a distributed, cross-brand RoboTaxi system.
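The map/reduce analogy can be sketched in a few lines. This is a toy illustration, not the actual algorithm from the thesis; the function and variable names are mine. Each camera "maps" its raw frames to abstract detections locally, and a distributed "reduce" step merges these detections to decide which camera should track which target.

```python
def reduce_assignments(detections):
    """Reduce step: merge the abstract per-camera detections and
    assign each target to the camera that sees it at the smallest
    distance. The cameras' local pre-processing of raw image data
    into (target, distance) pairs is the 'map' step."""
    best = {}  # target -> (camera, distance)
    for camera, targets in detections.items():
        for target, distance in targets.items():
            if target not in best or distance < best[target][1]:
                best[target] = (camera, distance)
    return {target: camera for target, (camera, _) in best.items()}

# Each camera has already condensed its image stream to target distances.
detections = {
    "cam1": {"t1": 4.0, "t2": 9.5},
    "cam2": {"t2": 3.2, "t3": 7.1},
}
print(reduce_assignments(detections))
# → {'t1': 'cam1', 't2': 'cam2', 't3': 'cam2'}
```

In the real system this reduction has to happen in a fully distributed fashion over a wireless network, which is where the actual research difficulty lies.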
The title of my thesis is "Dynamic reconfiguration algorithms for Active Camera Networks". The short, human-friendly abstract is:
Active Camera Networks consist of autonomous vehicles - each one equipped with a visual sensor - communicating wirelessly with each other to perform surveillance tasks in a collaborative way. This thesis is devoted to the problem of wide-area target acquisition of moving targets in a surveillance area. It addresses application scenarios where events unfold over a large geographic area, and close-up views have to be acquired for biometric tasks such as face detection. The main problem is to coordinate numerous cameras so that the system acquires exactly one close-up capture of each target.
The main publications that resulted from my Ph.D. work are as follows (before I married, my last name was Wittke):
M. Wittke, C. Grenz, and J. Hähner: "Towards Organic Active Vision Systems for Visual Surveillance", ARCS '11, 24th International Conference on Architecture of Computing Systems 2011, February 2011
M. Wittke and J. Hähner: "Distributed Vision Graph Update in Mobile Vision Networks", ARCS '10, 23rd International Conference on Architecture of Computing Systems 2010, Workshop Proceedings
M. Wittke and J. Hähner: "Self-balancing Reconfiguration Mechanisms for Active Vision Systems", DEBS '10, Ph.D. Forum - 4th ACM International Conference on Distributed Event-Based Systems, July 2010
M. Wittke, U. Jänen, A. Duraslan, E. Cakar, M. Steinberg and J. Brehm: "Activity Recognition using Optical Sensors on Mobile Phones", Informatics Society Annual Meeting 2009, pp. 2181-2194
M. Hoffmann, M. Wittke, J. Hähner and C. Müller-Schloer: "Spatial Partitioning in Self-organising Camera Systems", IEEE Journal of Selected Topics in Signal Processing, vol. 2, Aug. 2008
M. Wittke, M. Hoffmann, J. Hähner and C. Müller-Schloer: "MIDSCA: Towards a Smart Camera Architecture of Mobile Internet Devices", ICDSC '08, Second ACM/IEEE International Conference on Distributed Smart Cameras, Sept. 2008.