AI4b.io in Eindhoven for the ICAI day on "Enhancing Human Interaction with AI"

AI4b.io just returned from an ICAI day on "Enhancing Human Interaction with AI" and came back inspired. While attendees stopped by our poster showcasing the five AI4b.io projects, we also enjoyed high-quality presentations and hosted a roundtable discussion. To give an impression:
Bert de Vries (Professor of Natural Artificial Intelligence, TU Eindhoven) gave a talk titled "Natural Artificial Intelligence". He presented his work on developing algorithms that mimic human learning more closely, focusing on efficiency and robustness. He highlighted the striking energy gap between the human brain, which operates on about 20 watts, and AI systems, which require vast computational resources. This comparison underscored the potential gains if AI could match the brain's efficiency and resilience, especially its ability to retain skills over time despite biological limitations.
Another enlightening talk, titled "Semantic World Models as an Enabler to Integrate Black-Box AI with White-Box Control in Industry-Grade Robotics Applications", was given by Herman Bruyninckx (Full Professor, Mechanical Engineering, TU Eindhoven & KU Leuven). In his talk, he explored how combining AI with physical models can enhance AI's decision-making. He argued that where robust models for processes exist, they should be used instead of developing black-box models, which can be opaque. Bruyninckx humorously pointed out that while AI appears to be advancing, humans may be "unlearning" through over-reliance on tools like ChatGPT, thus lowering the bar for what AI needs to achieve to seem intelligent.
On top of that, we also had the opportunity to engage in a roundtable discussion. Renger chaired a session focused on "Consistency in AI: Achieving Repeatability and Reproducibility," with an emphasis on transparent audit trails and best practices for dependable AI. Our group, a mix of academic and industry experts, discussed:
- Benchmarks and Overfitting: Echoing Bruyninckx's points, we examined the risk of overfitting when benchmarks dominate training goals. One proposed solution was blind testing, with limited attempts on the same dataset to prevent models from merely fitting benchmarks, similar to how Kaggle challenges are structured.
- Uncertainty in Model Outcomes: We considered whether all AI outcomes should include uncertainty measures to build user trust. The question arose of whether users would accept results when informed of an 80% confidence level in a prediction, a practice potentially beneficial for transparency yet possibly unsettling for some users.
What a day! And there is more to come: AI4b.io will be present at the BNAIC conference, and you can check out the program here. Hopefully see you there!