Category Archives: Articles

Knowledge Management for a Cooperative Design System

Authors: Serge Tichkiewitch, Bruno Radulescu, George Dragoi, and Kusol Pimapunsri

Abstract: Every five years, the French Ministry of Industry launches a study of the key technologies for the following five years. Knowledge capitalization was one of the technologies identified in 2000. This paper starts with a description of some of the problems forecast at that time and the situation that has developed since. In this context, a definition of knowledge management is presented, and some related concepts are proposed.
Finally, it is shown how expert system technology, associated with a cooperative design modeler, allows the knowledge management concepts to be implemented.

Knowledge may be universal, vehicular or vernacular. Universal knowledge is normally shared by all people; geometrical knowledge is an example. An actor concerned solely with his or her own job uses only vernacular knowledge, which does not need to be shared. Vehicular knowledge is the type of knowledge that can be exchanged between two or more actors, allowing them, for instance, to perform collaborative design based on a common understanding. This last type of knowledge is therefore essential for establishing a dialog between two partners.
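
As a minimal sketch (the item names and the exchange test are invented for illustration, not taken from the chapter), the three scopes can be modelled as tags on knowledge items, with only universal and vehicular items being candidates for exchange between two actors:

  from dataclasses import dataclass
  from enum import Enum, auto

  class Scope(Enum):
      UNIVERSAL = auto()   # shared by all actors, e.g. geometry
      VEHICULAR = auto()   # exchangeable between specific trades
      VERNACULAR = auto()  # private to one actor's own job

  @dataclass
  class KnowledgeItem:
      name: str
      scope: Scope
      owner: str  # actor or trade that holds the item

  def exchangeable(item: KnowledgeItem, sender: str, receiver: str) -> bool:
      """An item can support a dialog between two actors only if it is
      universal, or vehicular knowledge held by the sending actor."""
      if item.scope is Scope.UNIVERSAL:
          return True
      return item.scope is Scope.VEHICULAR and item.owner == sender

  items = [
      KnowledgeItem("surface tangency rules", Scope.UNIVERSAL, "all"),
      KnowledgeItem("draft angle for casting", Scope.VEHICULAR, "foundry"),
      KnowledgeItem("in-house mould polishing trick", Scope.VERNACULAR, "foundry"),
  ]
  print([i.name for i in items if exchangeable(i, "foundry", "designer")])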

 

Managing in a Complex Environment

There is also a paradox to be overcome: safeguarding know-how over time while avoiding the risk that any part of the data becomes obsolete.

8.3 Knowledge Management

In order to define knowledge management (KM) and give its characteristics, let us look at the proposition of Y. Malhotra:

“Knowledge management caters to the critical issues of organizational adaptation, survival and competence in face of increasingly discontinuous environmental change. Essentially, it embodies organizational processes that seek synergistic combination of data and information processing capacity of information technology and the creative and innovative capacity of human beings.”

This is a strategic view of KM that considers the synergy between technological and behavioral issues as necessary for survival in “turbulent environments”. The need for synergy of technological and human capabilities is based on the distinction between the “old world of business” and the “new world of business.”

Within this view, Malhotra defines the old world of business as characterized by predictable environments in which the focus is on prediction and optimization-based efficiencies. This is the world of competence based on “information” as the strategic asset, and the emphasis is on controlling the behavior of organizational agents towards the fulfillment of pre-specified organizational goals and objectives. Information and control systems are used in this world to achieve alignment of the organizational actors with predefined “best practices.” The assumption is that such best practices retain their effectiveness over time.

In contrast, the new world of business is characterized by high levels of uncertainty and an inability to predict the future. Use of information and control systems and compliance with predefined goals, objectives and best practices may not necessarily achieve long-term organizational competence. This is the world of “re-everything”, which challenges the assumptions underlying the “accepted way of doing things”. This world needs the capability to understand the problem afresh, given the changing environmental conditions. The focus is not only on finding the right answers but also on finding the right questions. This world is differentiated from the “old world” by its emphasis on “doing the right thing” rather than “doing things right”.

KM is a framework within which the organization views all its processes as knowledge processes. According to this view, all business processes involve the creation, dissemination, renewal and application of knowledge towards organizational sustenance and survival.

This concept embodies a transition from the recently popular concept of the “information value chain” to a “knowledge value chain”. What is the difference? The information value chain considers technological systems as the key components guiding the organization’s business processes, while treating humans as relatively passive processors who implement “best practices” archived in information databases. In contrast, the knowledge value chain treats human systems as the key components, which engage in continuous assessment of the information archived in the technological systems. In this view, human actors do not implement best practices without active inquiry: they engage in an active process of sense making to continuously assess the effectiveness of those practices. The underlying premise is that the best practices of yesterday cannot be taken for granted as the best practices of today or tomorrow. Hence, double-loop learning, unlearning and relearning processes need to be designed into the organizational business processes.

Artificial Intelligence was a new technique that permitted the computer not only to solve equations but also to reason as an intelligent actor in order to solve problems or give diagnoses. Prolog, Frames, Production Rules and Case-based Reasoning are the new languages and formalisms used for the description of Expert Systems.
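
As a minimal sketch of one of these formalisms, the following forward-chaining production-rule engine fires rules until no new fact can be derived; the rules and facts are invented design examples, not a rule base from the chapter.

  # Tiny forward-chaining production-rule engine: each rule is a set of
  # premise facts and a conclusion fact; firing continues until no rule
  # adds a new fact to the working memory.
  rules = [
      ({"material is aluminium", "part is thin-walled"}, "use high-speed milling"),
      ({"use high-speed milling"}, "check fixture stiffness"),
      ({"batch size is large"}, "consider die casting"),
  ]

  def infer(facts: set[str]) -> set[str]:
      derived = set(facts)
      changed = True
      while changed:
          changed = False
          for premises, conclusion in rules:
              if premises <= derived and conclusion not in derived:
                  derived.add(conclusion)
                  changed = True
      return derived

  print(infer({"material is aluminium", "part is thin-walled"}))
  # -> also contains 'use high-speed milling' and 'check fixture stiffness'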

 

Structural and Functional Analysis for Assemblies

Authors: Hugo Falgarone and Nicolas Chevassus

Abstract: This article presents a systemic method for designing assemblies. It is based on generic concepts such as the modeling of assemblies using assembly nested graphs, which reflect the product design breakdown and the interfaces between components. The proposed method makes it possible to assess product producibility and the robustness of the assembly process, and it eases impact analysis following changes to product functions or features.
A software tool called GAIA, based on a user-friendly interface, has been developed to support this method. It enables assemblies to be specified through their interfaces and supports functional and structural analysis of assemblies. Interoperable with Digital Mock-up and Product Data Management (PDM) systems, it speeds up design changes and impact analysis. Finally, it is useful for grasping design intents and for capitalizing and reusing this design knowledge.
The adoption of this advanced modeling technique in support of the engineering assembly process improves the quality of designed products and reduces the cost of change management, customization and fault rectification by solving assembly issues at the design stage.
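
The abstract's nested-graph idea can be sketched as follows; the class, attribute names and example components are hypothetical and are not GAIA's actual data model. Components are nodes, interfaces are edges, and nesting is recorded through the product breakdown.

  # Nodes are components, edges are interfaces; 'parent' records the product
  # breakdown so that graphs nest (an assembly node owns its children).
  from collections import defaultdict

  class AssemblyGraph:
      def __init__(self):
          self.parent = {}                     # component -> owning assembly
          self.interfaces = defaultdict(set)   # component -> linked components

      def add_component(self, name, parent=None):
          self.parent[name] = parent

      def add_interface(self, a, b, kind="contact"):
          self.interfaces[a].add((b, kind))
          self.interfaces[b].add((a, kind))

      def impacted_by(self, component):
          """Components whose interfaces may be affected when 'component'
          changes -- the basis of a simple impact analysis."""
          return {other for other, _ in self.interfaces[component]}

  g = AssemblyGraph()
  g.add_component("wing-box")
  g.add_component("rib-12", parent="wing-box")
  g.add_component("skin-panel", parent="wing-box")
  g.add_interface("rib-12", "skin-panel", kind="riveted joint")
  print(g.impacted_by("rib-12"))   # {'skin-panel'}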

 

 7.1 Industrial Background

Traditional Computer Aided Design (CAD) tools help designers to set up product geometrical definitions. However, these tools do not easily capture designers’ intent, as would be needed in order to record the functional specification cascade with respect to the product breakdown. For complex assemblies, the main hurdle that prevents designers from understanding the results of systemic analysis is the lack of representation, on top of the product's 3D geometry, of both the functional requirements and the interfaces between components.

7.4 GAIA Software for Systemic Analysis of Assemblies

The EADS Corporate Research Centre has developed a new design tool to support the method presented. This innovative piece of software is called GAIA, which stands for Graphical Analysis of Interfaces for Assemblies. The tool makes it possible to capture the design intent, the product's structural and functional interfaces and the manufacturing process decisions through a user-friendly graphical user interface (GUI). The corresponding product-process specifications can be exported to various product life-cycle management (PLM) and PDM systems.

GAIA

GAIA is based on an MS Visio user interface and looks like an office tool. Its main features are:

  • Support of the presented design method
  • Ease of handling
  • Adaptability to many engineering applications
  • Client-server architecture with a multi-user database

GAIA provides a common framework and repository throughout the product-process design phases for supporting various methodologies and tools. Its main advantages are:

  • GAIA is a visual tool for specifying assemblies through interfaces
  • GAIA supports the functional and structural analysis for assemblies
  • GAIA speeds up design changes and impact analysis
  • GAIA, coupled with CAD and Computer Aided Process (CAP) tools, enables iterative design from GAIA specification to CAD/CAP definition and back.
  • GAIA makes it possible to capitalize and reuse design knowledge about assemblies.

 

Reusing Design Knowledge

Authors: Saeema Ahmed and Ken Wallace

Abstract: The long-term aim of this research is to develop a method of indexing design knowledge that is intuitive to engineering designers and therefore assists them in retrieving relevant information. This paper describes the development and preliminary evaluation of such an indexing method. The concepts for the method have been elicited from designers' descriptions of the design process. The method has been evaluated by indexing 92 reports related to one particular aero-engine.

Prior to the interviews, it was hypothesized that the ways designers describe their processes of designing can be classified into four categories (see the sketch after the list):

  1. the process itself, i.e. a description of the different tasks at each stage of the design process
  2. the physical product to be produced, i.e. the product, components, sub-assemblies and assemblies
  3. the functions that must be fulfilled by a particular component or assembly
  4. the issues, i.e. the various considerations the designer must take into account whilst carrying out the design process
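
As a minimal sketch of indexing along these four facets (the reports, facet values and the retrieve helper are invented for illustration; they are not the authors' actual indexing scheme or any of the 92 aero-engine reports):

  # Each report is indexed by the four facets; retrieval is a simple filter
  # on any combination of facet values.
  reports = [
      {"title": "R1", "process": "concept design", "product": "turbine blade",
       "function": "transfer load", "issues": ["fatigue", "cost"]},
      {"title": "R2", "process": "detail design", "product": "combustor liner",
       "function": "contain flow", "issues": ["thermal growth"]},
  ]

  def retrieve(**query):
      """Return titles of reports whose facet values match every query key."""
      hits = []
      for r in reports:
          ok = True
          for key, value in query.items():
              field = r.get(key)
              ok = value in field if isinstance(field, list) else field == value
              if not ok:
                  break
          if ok:
              hits.append(r["title"])
      return hits

  print(retrieve(product="turbine blade", issues="fatigue"))   # ['R1']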

6.5 Key Conclusions

The evaluation of the functions taxonomy suggested a need to combine the function taxonomy with a product and issues taxonomy to avoid loss of information.

Towards a Design Methodology for Self-optimizing Systems

Authors: Jürgen Gausemeier, Ursula Frank, Andreas Schmidt, and Daniel Steffen

Abstract: Self-optimizing systems will be able to react autonomously and flexibly to changing environments. They will learn and optimize their performance during their product life cycle. The key to the design of self-optimizing systems is to utilize reconfigurable system elements, communication structures and experienced knowledge. The concept of active principles of Self-Optimization is an important starting point.

5.2 Self-optimizing Systems

In terms of software engineering, this involves distributed systems of interacting agents:

“An agent is an autonomous, proactive, cooperative and extremely adaptive function module. The term “autonomous” implies an independent control system, with which the agent proactively initiates actions. Agents are regarded as function modules which work in cooperation or competition with one another. “Adaptive” refers to a generic behavior at run time, which may also, for example, include learning capabilities. A function module is taken to be a heterogeneous subsystem with electronic, mechanical and IT-related components.”

Combining the paradigm of intelligent agents with mechatronic structures makes it possible to construct self-optimizing mechanical engineering systems.

“Self-optimization of a technical system refers to the endogenous modification of the target vector due to changing environmental conditions and the resulting target-compliant, autonomous adaptation of the structure, the behavior and the parameters of this system. Self-optimization therefore far exceeds known control and adaptation strategies. Self-optimization endows systems with inherent “intelligence”, enabling them to react autonomously and flexibly to changing environmental conditions.”

The examination of self-optimizing systems is based on four aspects (a minimal sketch follows the list):

  1. the target system (e.g. a hierarchical system of targets or a target vector)
  2. the structure (i.e. topology of mechanical components, sensors and actuators),
  3. the behavior and
  4. the parameters.
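
A minimal sketch of the idea in code, assuming a toy suspension example: the behaviour model, the numbers and the adaptation rule are invented, and only the target vector and parameter aspects are shown explicitly. The loop compares the behaviour against the target vector and adapts a parameter accordingly.

  # Target vector and parameters of a toy self-optimizing suspension; the
  # structure aspect and a fuller behaviour model are omitted for brevity.
  targets = {"comfort": 0.8, "energy": 0.3}      # target vector (0..1)
  parameters = {"damping": 0.5}                  # adjustable parameter

  def behaviour(params, disturbance):
      """Toy behaviour model: comfort drops with disturbance and rises with
      damping, while energy use grows with damping."""
      comfort = max(0.0, 1.0 - disturbance / (params["damping"] + 1e-6))
      energy = params["damping"] * 0.6
      return {"comfort": comfort, "energy": energy}

  def self_optimize(disturbance):
      state = behaviour(parameters, disturbance)
      # endogenous adaptation: nudge damping toward the comfort target
      error = targets["comfort"] - state["comfort"]
      parameters["damping"] += 0.2 * error
      return state, dict(parameters)

  for d in (0.1, 0.4, 0.4):                      # changing environment
      print(self_optimize(d))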

The following principles determine Self-optimization:

  • Reconfiguring system elements
    • An adaptation to different environmental situations presupposes the presence of system elements which can be reconfigured or which can interact with other system elements in different combinations. In a chassis, for example, redundant actuators (mechanical spring, pneumatic spring, hydraulic cylinder) are used. They are combined in different ways (in parallel or in series) to absorb different stimuli.
  • Communication
    • System elements behave like software agents. They pursue their targets according to the target system of the overall system. They achieve these targets through negotiation and cooperation with other system elements. Generic patterns are defined for adjustment processes and negotiation principles. Examples of communication relations are chassis reconfiguration or an agreement about the right of way between two vehicles.
  • Experienced knowledge
    • In order to ensure optimal behavior in unknown operating situations, or in situations that are not described in models, experienced knowledge embodied as cases is stored and reused in similar situations; it is also shared with other systems (see the sketch below). So-called active principles of Self-optimization describe generic patterns of behavior which can be used in many situations. In particular, the use of active principles of Self-optimization creates greater opportunities and enables entirely new functionalities.
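
A minimal sketch of experienced knowledge embodied as cases, assuming a toy situation description and distance measure (none of the values come from the chapter): the action of the most similar previously experienced situation is reused.

  import math

  # Each case pairs an observed operating situation with the adaptation that
  # proved successful in it; reuse = retrieve the nearest stored situation.
  case_base = [
      ({"speed": 30.0, "roughness": 0.2}, "soft damping"),
      ({"speed": 90.0, "roughness": 0.1}, "stiff damping"),
      ({"speed": 50.0, "roughness": 0.8}, "reconfigure: hydraulic cylinder"),
  ]

  def distance(a, b):
      return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

  def reuse(situation):
      """Return the action of the most similar previously experienced case."""
      return min(case_base, key=lambda case: distance(case[0], situation))[1]

  print(reuse({"speed": 85.0, "roughness": 0.15}))   # 'stiff damping'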

 Active Principles of Self-optimization

Active principles of Self-optimization are meant to be a combination of a technical system, the influences on the technical system (the environment, the user, or other system elements) and adaptation components. The technical system consists of a structure model, in terms of the topology of mechanical components or the hierarchy of multi-agent systems; a behaviour model, such as differential equations or planning and learning systems; and the parameterization of the models. A target system prescribes the current goals which the technical system tries to achieve. In this way the active principle of Self-optimization allows for the endogenous modification of the technical system according to changing influences, as well as for the target-compliant, autonomous adaptation of parameters, behaviour and structure. Adaptation strategies and adaptation tactics define the kind and process of modification for long-term and medium- to short-term adaptation to application scenarios. Adaptation costs represent the effort of adaptation in terms of energy consumption, time delays, monetary payments and the like.
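
A minimal sketch of the strategy and cost side of an active principle (the candidate adaptations, their compliance flags and cost figures are invented): among the target-compliant candidates, the one with the lowest weighted adaptation cost is selected.

  # Candidate adaptations with their predicted target compliance and their
  # adaptation costs (energy, time delay); values are purely illustrative.
  candidates = [
      {"name": "retune parameters", "meets_targets": True, "energy": 1.0, "delay": 0.1},
      {"name": "switch behaviour model", "meets_targets": True, "energy": 2.5, "delay": 0.5},
      {"name": "reconfigure structure", "meets_targets": True, "energy": 5.0, "delay": 2.0},
      {"name": "do nothing", "meets_targets": False, "energy": 0.0, "delay": 0.0},
  ]

  def choose_adaptation(weight_energy=1.0, weight_delay=1.0):
      """Pick the target-compliant adaptation with the lowest weighted cost."""
      feasible = [c for c in candidates if c["meets_targets"]]
      return min(feasible,
                 key=lambda c: weight_energy * c["energy"] + weight_delay * c["delay"])

  print(choose_adaptation()["name"])   # 'retune parameters'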

 

Self Organization in Design

Author: Bart R. Meijer

Abstract: Principles of self organization are discussed as a frame of reference and a source of ideas for new design processes that can deal with more complexity in less time. It is demonstrated that set-based concurrent engineering makes effective use of these principles. Taking this idea one step further, an evolutionary organization for design processes is proposed.

Most academic institutions still teach structured design to their students, not because it is the best method, guaranteed to lead to good designs, but merely because it addresses all the relevant areas of the design process in a way that is comprehensible to students unaware of their own early design experiences.

Axiomatic design is not fundamentally different from structured design. The design phases are roughly identical. Axiomatic design is characterized by its design matrices, which provide an efficient representation of the design data. They show where design decisions are complicated (coupled) and where they are not. However, developing a set of uncoupled or decoupled design matrices spanning the design space of a given problem is as complex as solving the design problem using structured design.
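
A minimal sketch of what the design matrices express, using illustrative 2x2 matrices rather than any example from the chapter: rows stand for functional requirements, columns for design parameters, and the pattern of non-zero entries tells whether a design is uncoupled, decoupled or coupled.

  def classify_design_matrix(m):
      """Classify an axiomatic-design matrix by its non-zero pattern:
      diagonal -> uncoupled, triangular -> decoupled, otherwise coupled."""
      n = len(m)
      off_diag = [(i, j) for i in range(n) for j in range(n)
                  if i != j and m[i][j] != 0]
      if not off_diag:
          return "uncoupled"
      if all(i > j for i, j in off_diag) or all(i < j for i, j in off_diag):
          return "decoupled"
      return "coupled"

  print(classify_design_matrix([[1, 0], [0, 1]]))   # uncoupled
  print(classify_design_matrix([[1, 0], [1, 1]]))   # decoupled
  print(classify_design_matrix([[1, 1], [1, 1]]))   # coupled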

Following the principle of structured or axiomatic design, one could easily envisage a phased plan, perfectly fit for a work breakdown structure and presumably fit for an effective and efficient development process. Industrial practice shows that this approach often results either in risk-averse incremental development of a known concept, or in cyclic, hard-to-finalize development processes when a new concept is pursued. It is very hard to predict up front which dependencies in which concepts are vital to a successful design. As a consequence, the product architectures of cars and aircraft have not changed significantly over the decades of their existence. Despite claims that technology developments are speeding up, the impact of new technologies or new materials is often limited to the redesign of subsystems. The problem of introducing new and unknown relations is avoided as much as possible, and the opportunities offered by new business models are confined to implementing new technologies into existing product platforms.

 4.2 Concepts of Self Organization

In the area of systems control and cybernetics, self organization refers to systems that are capable of changing their structure and their functionality in order to adapt to new environments. Another perspective on self organization originates from a systems perspective on understanding nature, life and organizations. This perspective, called autopoiesis (self-production), does not take adaptation as a response to changes in the environment as an axiom; rather, it claims that living structures influence or adapt to their environment as a means to maintain themselves and improve their chances of reproduction.

Autopoiesis

An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components that produce components which:

  1. through their actions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and
  2. constitute it (the machine) as a concrete unity in the space in which they (the components) exist by specifying the topological domain of its realization as such a network.

Evolutionary Problem Solving

Evolutionary problem solving is based on the structure of genetic algorithms. The basic structure of the genetic algorithm, originally developed by Holland, is as follows (a minimal sketch follows the list):

  1. Initialize a starting population of physically feasible solutions.
  2. Create a new generation through genetic operators such as mutation, crossover and reproduction.
  3. Rank this population using the fitness function.
  4. Select the top of this population and randomly select a couple of others to create a new starting population.
  5. Repeat steps 2-4 until the top member of a generation has a sufficient fitness score to be acceptable as a solution.
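
A minimal sketch of steps 1-5, assuming a toy bit-string representation and a "count the ones" fitness function; the parameters and operators are illustrative, not Holland's original formulation.

  import random

  def fitness(solution):
      """Toy fitness: number of ones in the bit string."""
      return sum(solution)

  def mutate(solution, rate=0.1):
      return [1 - bit if random.random() < rate else bit for bit in solution]

  def crossover(a, b):
      cut = random.randrange(1, len(a))
      return a[:cut] + b[cut:]

  def genetic_algorithm(length=20, pop_size=30, generations=100):
      # step 1: initialize a starting population of feasible solutions
      population = [[random.randint(0, 1) for _ in range(length)]
                    for _ in range(pop_size)]
      for _ in range(generations):
          # step 3: rank the population by fitness
          population.sort(key=fitness, reverse=True)
          # step 5: stop when the top member is good enough
          if fitness(population[0]) == length:
              break
          # step 4: keep the top plus a few random others as parents
          parents = population[:5] + random.sample(population[5:], 5)
          # step 2: create a new generation by crossover and mutation
          population = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(pop_size)]
      return max(population, key=fitness)

  print(fitness(genetic_algorithm()))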

The success of this nature-inspired algorithm can be attributed to two properties that distinguish it from linear optimization techniques. The first property is redundancy and diversity: rather than developing one solution, genetic algorithms develop and maintain multiple solutions concurrently. The resulting diversity is needed to maximize the probability that, at all times, solutions are available that can comply with all requirements and constraints. The second property is the non-linearity of the selection process. With linear optimization, the fitness landscape is set from the start by the starting solution and the fitness function, and finding the optimum in this landscape could mean an exhaustive search through the entire landscape. Although the fitness landscape is also set from the start for a genetic algorithm, the algorithm employs multiple starting points for the search, and the generation and selection steps cause the effective fitness landscape to be reshaped at the start of each generation.

Evolutionary Problem Solving and Self Organization

The genetic algorithm (GA) is a system model for the self-reproduction principles of the autopoiesis theory. The solution patterns a GA may generate are predominantly the result of the initial set of solutions present at the start. The fitness function is the context within which structural changes may occur as long as survival as a unity or species is not at stake. Changing the fitness function will cause serious changes and may also cause death if the present elements cannot generate a sufficient fit (survival) to the new fitness function. In case of survival, biologists may recognize evolution, but they may also claim that the new organism is a different unity, capable of a different set of interactions, fit for the new context. The old species is then declared extinct, since it evolved into a new, distinguishable organism.

4.3 Set-based Concurrent Engineering

Ward and his co-authors argue that two fundamentally different approaches to concurrent engineering should be recognized: point-based and set-based. In the case of point-based design, a single solution is synthesized first, then analyzed and changed accordingly. Even though the phases of the design process may be executed concurrently, all designers and specialists invest their efforts in the pursuit of only one concept that is to be developed into a solution.

In set-based concurrent engineering, designers explicitly communicate and think about sets of design alternatives at both the conceptual and the parametric level. The efficiency of set-based over point-based design lies in the fact that, in communicating sets, implicitly or explicitly, all designers become more focused on the relations and constraints between different aspects of the design than they would be when focusing on a point solution. All designers communicate their range of options rather than one preferred option. Sometimes, to maintain focus, the constraints for these sets are set tighter than they would be in a point-based design.
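
A minimal sketch of what communicating sets can look like for a single shared parameter: each discipline publishes a feasible interval, the team works inside the intersection, and later analyses narrow it without collapsing it to a point. The disciplines, the parameter and the ranges are invented for illustration.

  # Each discipline states a feasible interval for a shared design parameter
  # (say, a wall thickness in mm); the design space is their intersection.
  ranges = {
      "structures":    (2.0, 6.0),   # strength requires at least 2.0
      "manufacturing": (1.5, 4.0),   # process cannot mould above 4.0
      "weight":        (0.0, 3.5),   # mass budget caps thickness at 3.5
  }

  def intersect(ranges):
      low = max(lo for lo, _ in ranges.values())
      high = min(hi for _, hi in ranges.values())
      if low > high:
          raise ValueError("no common feasible set -- renegotiate constraints")
      return low, high

  print(intersect(ranges))            # (2.0, 3.5): the set all parties accept

  # Narrowing: a new analysis tightens one discipline's range...
  ranges["structures"] = (2.8, 6.0)
  print(intersect(ranges))            # (2.8, 3.5): still a set, not a point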

Set-based Concurrent Engineering: Toyota and Nippondenso

The set-based engineering process of Toyota has the following characteristics:

  1. The team defines a set of solutions at the system level rather than a single solution.
  2. It defines sets of possible solutions for various subsystems.
  3. It explores these possible solutions in parallel, using analysis, design rules and experiments to characterize a set of possible solutions.
  4. It uses analysis to gradually narrow the sets of solutions. In particular, the team uses analysis of the set of possibilities for subsystems to determine appropriate specifications to impose on those subsystems.
  5. Once the team establishes a single solution for any part of the design, it does not change it unless absolutely necessary.

 

Nippondenso … also applies a process that has the characteristics of set-based concurrent engineering and extends this even to pre-design R&D. In this process, the degree of parallelism and redundancy is much higher than it typically is at Toyota.

As an automotive supplier, Nippondenso faces a higher demand for diversity, and its competitiveness is strongly affected by new technologies and new materials. In order to push the limits and stay ahead of the competition, Nippondenso tests as many ideas as it can to create a platform (set) of solutions that is competitive and can easily be adapted to the specific interfacing requirements of different car makes. What may come as a surprise is that the start of Nippondenso's development processes may be 3-5 years ahead of the start of the car development processes that adopt the new designs. Rather than pursuing rapid development once the outline specification from their customers is clear, Nippondenso pursues radical breakthrough designs that are ready before their customers ask for them. When they start working with their customers, the focus is on interfacing and not on the core technology, which enables them to avoid the major part of the development risk.

4.4 Evolutionary Organization of Design Process

The process can be as follows (a sketch is given after the list):

  1. Divide the staff into n independent teams that are all capable of executing the entire project, and give all these teams an identical assignment and a deadline.
  2. The teams will develop their concepts and solutions following set-based concurrent engineering practices, and they will record their achievements and findings in lessons learned books.
  3. At regular intervals, a fair is organized where all teams present their progress and give insight into their lessons learned books.
  4. At these fairs, team members look around at their “competitors” for promising partial solutions.
  5. After the fair, teams continue their own development, including ideas inspired by the last fair.
  6. If a design with sufficient fitness has been achieved, stop; else, repeat steps 3-5.
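
A minimal sketch of the fair mechanism under these assumptions (the teams, partial ideas and numeric scores are invented; in reality the ranking is implicit rather than numeric): each team adopts better partial ideas seen at other stands, and occasionally an idea seen at the fair triggers a new thought.

  import random

  # Each team holds a concept made of partial ideas with a value; the fair
  # lets ideas migrate between concepts, like crossover in a GA.
  teams = {
      "team A": {"gearbox": 3, "housing": 1},
      "team B": {"gearbox": 2, "housing": 4},
      "team C": {"gearbox": 1, "housing": 2},
  }

  def concept_score(ideas):
      return sum(ideas.values())

  def fair(teams, mutation_rate=0.2):
      """One fair: every team adopts better partial ideas seen at other
      stands, and occasionally a seen idea triggers a new thought."""
      for name, ideas in teams.items():
          for other, other_ideas in teams.items():
              if other == name:
                  continue
              for part, value in other_ideas.items():
                  if value > ideas[part]:          # crossover: adopt the better part
                      ideas[part] = value
                  if random.random() < mutation_rate:
                      ideas[part] += 1             # mutation: new thought triggered at the fair

  for round_number in range(3):                    # repeat fairs until fitness suffices
      fair(teams)
  print({name: concept_score(ideas) for name, ideas in teams.items()})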

The processes within the teams could also have the characteristics of a genetic algorithm if they apply brainstorming for finding and selecting ideas. However, the fair is really the place where crossovers and mutations occur. At the fair, everyone is looking for clever ideas that could fit their own concept (crossovers), and some ideas may also trigger new thoughts (mutations). Although a fitness function that could be used for ranking exists, the organizational form of a GA has the advantage that the ranking of partial ideas is fuzzy and not explicit. This means that ideas that may not be very successful in one context could be a perfect fit in another context; with explicit ranking, these ideas could have been lost. The implicit ranking also solves a social problem of working with a large engineering workforce, where a dozen socially dominant engineers will monopolize the decision making at centralized meetings to a degree where a significant portion of the engineering staff effectively has no influence. Because the central meeting is now a fair where implicit recognition is the mechanism for the survival of ideas, good ideas, regardless of their source, stand a good chance of being inherited into the final concept. The process can be made more efficient if it is overlaid with a structured design process in which progress at the exchange moments (the fairs) is synchronized.