Design concept evaluation is a key step in the new product development process, with a significant impact on the product's success and total cost over its life cycle. This paper is motivated by two limitations of the state-of-the-art in concept evaluation: (1) the amount and diversity of user feedback and insights utilized by existing concept evaluation methods, such as quality function deployment, are limited; and (2) subjective concept evaluation methods require significant manual effort, which in turn may limit the number of concepts considered for evaluation. A deep multimodal design evaluation (DMDE) model is proposed in this paper to bridge these gaps by providing designers with an accurate and scalable prediction of new concepts' overall and attribute-level desirability based on large-scale user reviews of existing designs. The attribute-level sentiment intensities of users are first extracted and aggregated from online reviews. A multimodal deep regression model is then developed to predict the overall and attribute-level sentiment values based on features extracted from orthographic product images via a fine-tuned ResNet-50 model and from product descriptions via a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model, aggregated using a novel self-attention-based fusion model. The DMDE model adds a data-driven, user-centered loop within the concept development process to better inform concept evaluation. Numerical experiments on a large dataset from an online footwear store indicate promising performance by the DMDE model, with an MSE loss of 0.001 and over 99.1% accuracy.
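The core of the fusion step described above can be illustrated with a minimal sketch: the image and text embeddings are treated as a two-token sequence and combined with scaled dot-product self-attention. All dimensions, weights, and inputs below are random stand-ins for the ResNet-50 and BERT features, not the paper's trained model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_fusion(img_feat, txt_feat, W_q, W_k, W_v):
    # Stack the two modality embeddings as a 2-token sequence
    X = np.stack([img_feat, txt_feat])           # (2, d)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v          # (2, d) each
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))         # (2, 2) cross-modal weights
    fused = attn @ V                             # (2, d)
    return fused.mean(axis=0)                    # pooled joint representation

rng = np.random.default_rng(0)
d = 8
img = rng.normal(size=d)   # stand-in for a ResNet-50 image embedding
txt = rng.normal(size=d)   # stand-in for a BERT description embedding
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
fused = self_attention_fusion(img, txt, W_q, W_k, W_v)
print(fused.shape)  # (8,)
```

In the full model, the pooled representation would feed a regression head that outputs the overall and attribute-level sentiment values.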
Future industrial automation systems are anticipated to be shaped by intelligent technologies that allow machines to adapt to variations and uncertainties in processes and work environments. This paper is motivated by the need for new intelligent methods that enable efficient and scalable training of collaborative robots on a variety of tasks and foster their adaptability to new tasks and environments. Recent advances in deep reinforcement learning (RL) provide new possibilities to realize this vision. The state-of-the-art in deep RL offers proven algorithms that enable autonomous learning and mastery of a variety of robotic manipulation tasks with minimal human intervention. However, current deep RL algorithms predominantly specialize in a narrow range of tasks, are sample inefficient, and lack sufficient stability, which hinders their adoption in real-life, industrial settings. This paper develops and tests a Hyper-Actor Soft Actor-Critic (HASAC) deep RL framework based on the notions of task modularization and transfer learning to tackle these limitations. The goal of the proposed HASAC is to enhance an agent's adaptability to new tasks by transferring the learned policies of former tasks to the new task through a "hyper-actor". The HASAC framework is tested on the virtual robotic manipulation benchmark, Meta-World. Numerical experiments indicate superior performance by HASAC over state-of-the-art deep RL algorithms in terms of reward value, success rate, and task completion time.
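Two ingredients of the approach above can be sketched in a few lines: initializing a new task's actor from previously learned task policies, and the Polyak (soft) target update used in SAC-family algorithms. The convex-combination transfer rule below is a hypothetical simplification; the paper's hyper-actor is a learned module, not a fixed average.

```python
import numpy as np

def hyper_actor_init(task_policies, weights):
    """Initialize a new task's actor as a convex combination of
    previously learned task policies (illustrative transfer rule)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, task_policies))

def soft_update(target, source, tau=0.005):
    """Polyak averaging of target-network parameters, as in SAC."""
    return (1.0 - tau) * target + tau * source

# Toy "policies" for two previously learned tasks
reach = np.ones(4)
push = np.full(4, 3.0)
new_actor = hyper_actor_init([reach, push], weights=[0.5, 0.5])
print(new_actor)  # [2. 2. 2. 2.]
```

The soft update keeps the critic targets slowly tracking the learned networks, which is one of the stabilizing mechanisms in soft actor-critic training.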
Devising intelligent systems capable of identifying the idiosyncratic needs of users at scale and translating them into attribute-level design feedback and recommendations is a key prerequisite for successful user-centered design processes. Recent studies show that 49% of design firms lack systems and tools for monitoring external platforms, and only 8% have adopted digital, data-driven approaches for new product development despite acknowledging them as a high priority. State-of-the-art attribute-level sentiment analysis approaches based on deep learning have achieved promising results; however, these methods pose strict preconditions: they require manually labeled training data and attributes predefined by experts, and they only classify sentiments into predefined categories, which has limited implications for designers. This article develops a rule-based methodology for extracting and analyzing the sentiment expressions of users on a large scale, from the myriad reviews available on social media and e-commerce platforms. The methodology further advances current unsupervised attribute-level sentiment analysis approaches by enabling efficient identification and mapping of sentiment expressions of individual users onto their respective attributes. Experiments on a large dataset scraped from a major e-commerce retail store for apparel indicate 74.3%–93.8% precision in extracting attribute-level sentiment expressions of users and demonstrate the feasibility and potential of the developed methodology for large-scale need finding from user reviews.
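The mapping of sentiment expressions onto attributes can be illustrated with a minimal rule-based sketch: each opinion word is attached to the nearest attribute noun within a fixed token window, a common unsupervised heuristic. The attribute vocabulary and opinion lexicon below are illustrative stand-ins, not the paper's actual rules.

```python
# Illustrative lexicons (not the paper's); polarity: +1 / -1
OPINIONS = {"comfortable": 1, "great": 1, "soft": 1, "narrow": -1, "flimsy": -1}
ATTRIBUTES = {"sole", "laces", "fit", "fabric", "toe"}

def extract_pairs(tokens, window=3):
    """Attach each opinion word to the nearest attribute noun
    within a fixed token window."""
    pairs = []
    for i, tok in enumerate(tokens):
        if tok in OPINIONS:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            nearby = [(abs(j - i), tokens[j]) for j in range(lo, hi)
                      if tokens[j] in ATTRIBUTES]
            if nearby:
                _, attr = min(nearby)   # closest attribute wins
                pairs.append((attr, tok, OPINIONS[tok]))
    return pairs

review = "the sole is comfortable but the fit feels narrow".split()
print(extract_pairs(review))
# → [('sole', 'comfortable', 1), ('fit', 'narrow', -1)]
```

Aggregating such (attribute, expression, polarity) triples over many reviews yields the attribute-level sentiment signals the methodology provides to designers.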
The designers' tendency to adhere to a specific mental set and heavy emotional investment in their initial ideas often limit their ability to innovate during the design ideation process. The shrinking time-to-market and the growing diversity of users' needs further exacerbate this gap. Recent advances in deep generative models have created new possibilities to overcome the cognitive obstacles of designers through automated generation or editing of design concepts. This article explores the capabilities of generative adversarial networks (GANs) for automated, attribute-aware generative design of the visual attributes of a product. Specifically, a design attribute GAN (DA-GAN) model is developed for automated generation of fashion product images with the desired visual attributes. Experiments on a large fashion dataset signify the potential of GANs for attribute-aware generative design, verify the ability to edit attributes with relatively high accuracy, and uncover several key challenges and research questions for future work.
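The attribute-conditioning idea behind such a model can be sketched as follows: the generator consumes a noise vector concatenated with an attribute vector, so changing an attribute entry edits the corresponding visual attribute while the noise holds the rest of the design fixed. The tiny linear "generator" and the attribute names below are stand-ins for illustration, not DA-GAN.

```python
import numpy as np

rng = np.random.default_rng(1)
z_dim, attr_dim, img_dim = 16, 4, 32
W = rng.normal(size=(z_dim + attr_dim, img_dim))  # toy generator weights

def generate(z, attrs):
    """Conditional generation: noise + attribute vector -> 'image'."""
    x = np.concatenate([z, attrs])
    return np.tanh(x @ W)          # fake "image" pixels in [-1, 1]

z = rng.normal(size=z_dim)
# Hypothetical attribute slots: [sleeveless, long-sleeve, ...]
variant_a = generate(z, np.array([1.0, 0.0, 0.0, 0.0]))
variant_b = generate(z, np.array([0.0, 1.0, 0.0, 0.0]))
print(variant_a.shape, bool(np.abs(variant_a - variant_b).max() > 0))
```

Holding z fixed while flipping attribute entries is what makes the generation attribute-aware: the same base design is rendered with different target attributes.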
Yuan, Moghaddam, 2020
This paper aims to advance the fundamental understanding of the affordances of Augmented Reality (AR) as a workplace-based learning and training technology in supporting manual or semi-automated manufacturing tasks that involve both complex manipulation and reasoning. Between-subjects laboratory experiments involving 20 participants are conducted on a real-life electro-mechanical assembly task to investigate the impacts of various modes of information delivery through AR, compared to traditional training methods, on task efficiency, number of errors, learning, independence, and cognitive load. The AR application is developed in Unity and deployed on HoloLens 2 headsets. Interviews with experts from industry and academia are also conducted to create new insights into the affordances of AR as a training versus assistive tool for manufacturing workers, as well as the need for intelligent mechanisms that enable adaptive and personalized interactions between workers and AR. The findings indicate that despite comparable performance between the AR and control groups in terms of task completion time, learning curve, and independence from instructions, AR dramatically decreases the number of errors compared to traditional instruction, an effect that is sustained after the AR support is removed. Several insights drawn from the experiments and expert interviews are discussed to inform the design of future AR technologies for both training and assisting incumbent and future manufacturing workers on complex manipulation and reasoning tasks.
Eliciting user needs for individual components and features of a product or a service on a large scale is a key requirement for innovative design. Synthesizing data as an initial discovery phase of a design process is usually accomplished with a small number of participants, employing qualitative research methods such as observations, focus groups, and interviews. This leaves an entire swath of pertinent user behavior, preferences, and opinions uncaptured. Sentiment analysis is a key enabler for large-scale need finding from the online user reviews generated on a regular basis. A major limitation of current sentiment analysis approaches used in design sciences, however, is the need for laborious labeling and annotation of large review datasets for training, which in turn hinders their scalability and transferability across different domains. This article proposes an efficient and scalable methodology for automated and large-scale elicitation of attribute-level user needs. The methodology builds on the state-of-the-art pretrained deep language model, BERT (Bidirectional Encoder Representations from Transformers), with new convolutional net and named entity recognition (NER) layers for extracting attribute, description, and sentiment words from online user review corpora. The machine translation evaluation metric BLEU (BiLingual Evaluation Understudy) is utilized to extract need expressions in the form of predefined part-of-speech combinations (e.g., adjective–noun, verb–noun). Numerical experiments are conducted on a large dataset scraped from a major e-commerce retail store for apparel and footwear to demonstrate the performance, feasibility, and potential of the developed methodology.
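The final extraction step above can be illustrated with a minimal sketch: scanning a POS-tagged review for predefined part-of-speech combinations such as adjective–noun and verb–noun. The tags here are supplied by hand for illustration; in the methodology they would come from the fine-tuned BERT+NER pipeline.

```python
# Predefined part-of-speech patterns for need expressions
PATTERNS = [("ADJ", "NOUN"), ("VERB", "NOUN")]

def extract_expressions(tagged):
    """Return word pairs whose consecutive POS tags match a pattern."""
    hits = []
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
        if (t1, t2) in PATTERNS:
            hits.append(f"{w1} {w2}")
    return hits

# Hand-tagged toy review: "need wider toe box, love laces"
tagged = [("need", "VERB"), ("wider", "ADJ"), ("toe", "NOUN"),
          ("box", "NOUN"), ("love", "VERB"), ("laces", "NOUN")]
print(extract_expressions(tagged))
# → ['wider toe', 'love laces']
```

Candidate pairs extracted this way can then be scored against reference need expressions, which is where an n-gram overlap measure such as BLEU comes in.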
Journal of Mechanical Design
Han, Moghaddam, 2021
The vision of Industry 4.0 is to materialize the notion of a lot-size of one through enhanced adaptability and resilience of manufacturing and logistics operations to dynamic changes or deviations on the shop floor. This article is motivated by the lack of formal methods for efficient transfer of knowledge across different yet interrelated tasks, with special reference to collaborative robotic operations such as material handling, machine tending, assembly, and inspection. We propose a meta reinforcement learning framework to enhance the adaptability of collaborative robots to new tasks through task modularization and efficient transfer of policies from previously learned task modules. Our experiments on the OpenAI Gym Robotics environments Reach, Push, and Pick-and-Place indicate an average 75% reduction in the number of iterations to achieve a 60% success rate as well as a 50%–80% improvement in task completion efficiency, compared to the deep deterministic policy gradient (DDPG) algorithm as a baseline. The significant improvements in the jumpstart and asymptotic performance of the robot create new opportunities for addressing, through modularization and transfer learning, the current limitations of learning robots in industrial settings associated with sample inefficiency and specialization on a single task.
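The transfer-learning metrics cited above can be made concrete with a short sketch: jumpstart is the gain in initial performance, and sample efficiency can be compared via the number of iterations each learning curve needs to reach the target success rate. The learning curves below are synthetic, for illustration only.

```python
def iters_to_success(curve, target=0.6):
    """Index of the first point at or above the target success rate."""
    for i, s in enumerate(curve):
        if s >= target:
            return i
    return None

# Synthetic success-rate curves (fraction of successful episodes)
baseline = [0.0, 0.1, 0.2, 0.35, 0.5, 0.62, 0.7]   # learning from scratch
transfer = [0.3, 0.55, 0.65, 0.75, 0.8, 0.85, 0.9]  # with transferred modules

jumpstart = transfer[0] - baseline[0]               # initial-performance gain
reduction = 1 - iters_to_success(transfer) / iters_to_success(baseline)
print(jumpstart, round(reduction, 2))  # 0.3 0.6
```

A reported "75% reduction in iterations to a 60% success rate" corresponds to this `reduction` quantity computed over the actual training curves.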
Journal of Mechanical Design
Chen, Heydari, Moghaddam, 2021