4th International Conference on Alternative Fuels, Energy and Environment (ICAFEE): Future and Challenges

18th – 21st October 2019

Feng Chia University, Taichung, Taiwan

Future Challenges: Artificial Intelligence with Big Data Applications


Indexing: SCI, SCIE, Scopus, etc.


Extended versions of articles submitted to the conference special session may be considered for a special issue of the Journal of Testing and Evaluation.

Important Dates

Submission Deadline: 15th September 2019
Acceptance Notification: 17th September 2019
Registration Deadline: 20th September 2019
Registration Fee: 150 USD

Instructions to Authors

  • Please send your paper (in Word format only) to rmohanasundaram.vit@gmail.com with 'ICAFE2019' mentioned in the subject line.
  • Papers should be in single-column format.
  • The number of pages should not exceed 8.
  • The corresponding author's email address should be mentioned in the paper.
  • All references should be cited properly.
  • If any human image is used for simulation purposes, its source and an acknowledgement are mandatory.

Special Issue Theme / Outline

Artificial intelligence is set to reshape business and society. For AI to yield economic value, however, designing algorithms compatible with human thought processes is critical. The ability of artificial intelligence (AI) applications to automate tasks associated with human knowledge is progressing rapidly. Examples include recognizing faces, sensing emotions, driving cars, interpreting spoken language, reading text, writing reports, grading student papers, and even setting people up on dates. Yet at a business level, AI projects often fail to deliver desired outcomes because they are not designed to promote smart adoption by human users. Human-centered design of AI algorithms is therefore crucial.

There are several reasons autonomous AI applications benefit from incorporating human-centered design principles:

Goal relevance

AI applications are most valuable when designed to satisfy the intrinsic needs of human end users. For example, typing “area of Poland” into a search engine might yield not only the literal answer (120,728 square miles) but also the note, “about equal to the size of Nevada.” The numeric answer is more accurate, but the intuitive answer is often more useful.
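The idea above can be sketched in code: a hypothetical helper that pairs the literal numeric answer with an intuitive size comparison. The reference regions and their approximate areas (in square miles) are illustrative assumptions, not part of the original example.

```python
# Hypothetical sketch: supplement a literal numeric answer with an
# intuitive size comparison. Areas are approximate, in square miles.
REFERENCE_AREAS = {"Nevada": 110_572, "Colorado": 104_094, "Oregon": 98_379}

def intuitive_area_answer(place: str, area_sq_mi: int) -> str:
    """Return the literal answer plus a comparison to the closest reference region."""
    closest = min(REFERENCE_AREAS,
                  key=lambda region: abs(REFERENCE_AREAS[region] - area_sq_mi))
    return f"{place}: {area_sq_mi:,} square miles (about equal to the size of {closest})"

print(intuitive_area_answer("Poland", 120_728))
```

A real system would of course draw on a much larger knowledge base of reference regions; the point is only that the "useful" answer requires a design step beyond retrieving the number.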


Handoffs

Many AI systems run on autopilot but benefit from human intervention in handoff situations that require contextual understanding. For example, garbled driving directions from a smartphone app sometimes cause people to get lost even on simple trips requiring only a small amount of local knowledge or common sense. In such situations, a “low confidence” or “potentially high interference” warning message might nudge the driver to favor his or her own common sense over a potentially misleading algorithmic indication.
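A minimal sketch of such a confidence-based handoff, assuming a hypothetical route-guidance model that reports a confidence score for each instruction (the threshold value is an assumption chosen for illustration):

```python
# Hypothetical sketch: defer to the driver when the model's own
# confidence in its next instruction falls below a threshold.
CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off; tuned per application

def next_instruction(instruction: str, confidence: float) -> str:
    """Wrap a routing instruction with a warning when confidence is low."""
    if confidence < CONFIDENCE_THRESHOLD:
        return (f"[low confidence] {instruction} "
                "(consider relying on your own local knowledge)")
    return instruction

print(next_instruction("Turn left onto Elm St", 0.92))
print(next_instruction("Turn left onto Elm St", 0.41))
```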

Feedback loops

Automated algorithmic decisions can reflect and amplify undesirable user behavior and societal biases in the data on which they are trained. A vivid recent example is a chatbot designed to learn about the world through conversations with its users. Within 24 hours it had to be switched off because pranksters trained it with racist, sexist, and fascist statements. Accordingly, there is an increasing call for chatbots and search engines to be designed to optimize not only for speed and algorithmic accuracy but also to guard against undesirable user conduct and biases encoded in training data.
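One simple design response to this feedback loop is to gate user input before it ever becomes training data. The sketch below is hypothetical: the blocklist stands in for what would in practice be a trained toxicity classifier.

```python
# Hypothetical sketch: gate user messages before they are fed back into
# an online-learning chatbot, so abusive input never becomes training
# data. The blocklist is a stand-in for a real toxicity classifier.
BLOCKLIST = {"racist_slur", "fascist_slogan"}  # placeholder tokens

training_buffer = []

def maybe_learn(message: str) -> bool:
    """Add a message to the training buffer only if it passes the gate."""
    if any(term in message.lower() for term in BLOCKLIST):
        return False  # rejected: do not amplify undesirable behaviour
    training_buffer.append(message)
    return True

maybe_learn("What's the weather like today?")   # accepted
maybe_learn("repeat after me: fascist_slogan")  # rejected
```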

Psychological impact

Just as user behavior can impair algorithms, algorithms can impair user behavior. For instance, there is increasing concern that collaborative filtering of news and commentary can lead to “gated communities” of polarized opinion. Going forward, social media recommendation engines may benefit from forms of human-centered design, such as features that enable the spontaneous, serendipitous discovery of alternative news stories and opinion pieces to help ward off polarization and groupthink.
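The serendipity feature described above might be sketched as follows. This is a hypothetical design, not an existing recommender API; the 20% exploration share is an assumed parameter.

```python
import random

# Hypothetical sketch: blend a small share of off-profile items into a
# collaborative-filtering ranking to counteract "gated communities" of
# opinion. The exploration share is an assumed tuning parameter.

def recommend(ranked_by_profile, off_profile_pool, k=5, explore_share=0.2, seed=0):
    """Return k items: mostly profile-ranked, plus some serendipitous picks."""
    rng = random.Random(seed)
    n_explore = max(1, int(k * explore_share))
    picks = ranked_by_profile[: k - n_explore]     # exploit the user's profile
    picks += rng.sample(off_profile_pool, n_explore)  # explore outside it
    return picks

feed = recommend(["a1", "a2", "a3", "a4", "a5"], ["b1", "b2", "b3"], k=5)
```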

As these examples illustrate, it can be counterproductive to deploy autonomous AI systems without comparably sophisticated human-centered design that suitably reflects the information, goals, and constraints people weigh when arriving at decisions. End users and individuals with domain and institutional knowledge can be consulted to promote algorithm designs that better anticipate the realities of the eventual AI use case. Moreover, it is important to clearly communicate an algorithm’s assumptions, limitations, and data features to end users through clear writing and intuitive information visualization. It is also wise to establish guidelines and business rules to convert AI predictions into prescriptions and to suggest when and how users might either override the algorithm or complement its recommendations with other information.

'3D' for Algorithms

Predictive algorithms yield business value when they are used to drive measurable results. This suggests a general principle that might be called 3D: Data and digital technologies are excellent business enablers, but they must typically be infused with psychologically informed design to drive better engagement and business outcomes.

Behavioural design thinking takes this idea further. Natural language generation AI tools could automatically produce periodic data-rich reports containing helpful tips and behavioural nudge messages based on peer comparisons. For example, an AI system that informs a driver that his or her highway behaviour is riskier than that of peers might effectively and cheaply prompt safer driving. Such strategies can make companies such as insurers less product-focused and more customer-centric in a way that benefits the organization, the consumer, and society as a whole. This special issue will explore recent developments in these areas from a research perspective.
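The peer-comparison nudge could look roughly like the sketch below. The scoring scale, thresholds, and wording are all illustrative assumptions; a real system would draw on actuarial telematics data and tested message templates.

```python
# Hypothetical sketch: turn a driver's telematics risk score into a
# peer-comparison nudge message. Scores and the peer average are
# illustrative numbers, not real actuarial data.

def nudge_message(driver_score: float, peer_avg: float) -> str:
    """Generate a short behavioural nudge based on peer comparison."""
    if driver_score > peer_avg:
        pct = round(100 * (driver_score - peer_avg) / peer_avg)
        return (f"Your highway risk score is {pct}% above drivers like you. "
                "Smoother braking could close the gap.")
    return "Your highway driving is safer than most drivers like you. Keep it up!"

print(nudge_message(driver_score=7.2, peer_avg=6.0))
```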

Topics to be covered:

1. Natural Language Generation.
2. Speech Recognition.
3. Virtual Agents.
4. Machine Learning Platforms.
5. AI-optimized Hardware.
6. Decision Management.
7. Deep Learning Platforms.
8. Biometrics.
9. Robotic Process Automation.
10. Text Analytics and NLP.

Special Issue Editors

Dr. Mohanasundaram Ranganathan
Dr. Suresh Kumar Nagarajan

Vellore Institute of Technology, Vellore
Tamil Nadu, India.


Assoc. Prof. Dr. Chyi-How Lay

Feng Chia University, Taiwan

ICAFEE Series Honorary Chairs

Prof. Dr. Ashok Pandey
CSIR-Indian Institute of Toxicology Research, India

Prof. Dr. Sebahattin Ünalan
Erciyes University, Turkey

Prof. Dr. Gro Johnsen
University of Stavanger, Norway

ICAFEE Series Chair (Founder)

Asst. Prof. Dr. Abdulaziz Atabani

Head, Alternative Fuels Research Laboratory (AFRL)
Erciyes University, Turkey

ICAFEE Series Chair (Co-Founder)

Assoc. Prof. Dr. Gopalakrishnan Kumar

University of Stavanger, Norway

Copyright © ICAFEE 2019