Wednesday, January 23, 2013

The Geometry Kernel and What it Means to Product Development

There certainly has been much talk about the CAD geometry kernel in the last few years. Much of this talk comes from the rumors around Dassault Systèmes changing the SolidWorks kernel. I think it is clear now that Dassault has no plans to change the SolidWorks kernel, but rather is developing a new product on the Dassault kernel.

So what's the big deal about the geometry kernel anyway? Why do we even care about what kernel is under the covers of our favorite CAD tool? Should we care about it? Do you know how the geometry kernel can impact your ability to be effective in the design of your products? Strangely, the answer to these questions depends on many factors.


I know I am going to oversimplify this, but here goes anyway. In its purest form the CAD kernel is a geometry engine. It takes instructions, processes the instructions, and delivers results. The results are typically in the form of geometry. Every geometry kernel on the market is unique in many different ways.


  • The instructions or "functions" that the kernel will accept are very specific to the kernel. Also, each function has a specific set of operands or parameters that go with it. These of course have to be presented to the kernel in the appropriate format. There are no standards for kernel functions/instructions.
  • Kernels can also define geometry in many different forms. Some kernels understand analytic geometry, some understand B-Splines, some understand NURBS, and so on. Some might understand all of the above and know how to merge the different definitions. There are no standards as to how kernels mix and match these definitions.
  • Geometry kernels can also work at different geometry resolutions (accuracy). Some kernels are more stable at very low resolution, while others work more effectively at a higher resolution. Most kernels provide adjustable geometry resolution and the CAD system typically controls this, either through a setting or automatically. There is no standard geometry resolution for CAD geometry.
  • Geometry kernels can also calculate the geometric results differently. Perhaps the easiest way to see this is with the corner "Round" or "Blend" function. Every kernel out there will calculate the vertex region differently - and it is very visible. There are no standards as to how a kernel calculates this type of geometry. Here are some examples from some of the more popular CAD tools on the market. Pay close attention to the differences in topology in these examples. Also note in these examples how the planar faces are affected differently. I can usually tell which CAD system was used to create a 3D model just by looking at the rounded corners. There are no standards for these types of calculations. (A small illustrative sketch follows the images below.)

[Images: corner blend results from four popular CAD tools. Same geometry with the same radius in all four cases; all blends done in one function/feature.]
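To make the "no standards" point a bit more concrete, here is a tiny, purely illustrative sketch in Python. The two kernel classes, their function names, and their parameters are all invented for this post; no real kernel works this way. The point is simply that the "same" corner blend is requested through different functions, with different parameters and different vocabularies, and each kernel decides the corner topology for itself.

```python
# Hypothetical kernel APIs, invented for illustration only.

class KernelA:
    def blend_edges(self, edges, radius, corner_type="SPHERICAL"):
        # Kernel A asks the caller to pick the vertex-patch style explicitly.
        return f"KernelA.blend_edges: {len(edges)} edges, r={radius}, corner={corner_type}"

class KernelB:
    def round_edges(self, edge_set, value, setback=None):
        # Kernel B exposes a 'setback' concept Kernel A does not have, and it
        # chooses the vertex topology internally.
        return f"KernelB.round_edges: {len(edge_set)} edges, r={value}, setback={setback}"

corner_edges = ["e1", "e2", "e3"]   # the three edges meeting at one corner

print(KernelA().blend_edges(corner_edges, radius=5.0))
print(KernelB().round_edges(corner_edges, value=5.0))

# Same nominal input, but the faces, edges, and vertices produced at the corner
# can differ in count and in surface type - which is why the rounded corners
# look different from one CAD system to the next.
```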

Those are some of the big differences. There are many more, especially in the area of freeform surface definition.

So, again, what does all this kernel stuff mean to the product development process?

When we are working with CAD we are typically creating two layers of information. The TOP layer is the feature definition, including any import or non-ordered geometry, sketches, parameters, and other modeling functions. This is the history tree, or what some may call the "design intent". On the BOTTOM layer is the resulting geometry. (Of course, with direct modeling all you have is the bottom layer.) When moving CAD data from one kernel to another, both layers need to be considered.

Let’s consider the TOP layer first; it is actually the most complex. (And for you direct modeling people: you don't need to be concerned with this layer. You can skip to the BOTTOM layer covered below.) A parametric history-based CAD system is basically recording every function it sends to the kernel into the history tree. These kernel functions and their parameters are bundled into the "parametric feature" definition. For example: a sketch with an extrusion distance creates a primitive. Constraints control the size and position of the new primitive. Then a Boolean function is added to specify whether the primitive will be added to or removed from the parent geometry. The Boolean function with the primitive and related parameters is passed to the kernel, and the resulting geometry is processed and revealed. This kernel function is processed every time this "feature" is regenerated or reprocessed. The kernel function, along with its required parameters, is very specific to the kernel, as discussed above. It is highly likely that another kernel will not understand this very specific function, and even if it did, the geometric results could be very different.
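Here is a rough sketch, in Python, of the replay idea described above. Everything in it - the ToyKernel class, the function names, the feature records - is invented for illustration; a real history-based system is vastly more involved. It only shows the shape of the mechanism: the history tree is a recorded sequence of kernel instructions, and regeneration means replaying that sequence through the kernel.

```python
class ToyKernel:
    """Stand-in for a geometry kernel: takes instructions, returns 'geometry'."""
    def execute(self, function, **params):
        # A real kernel would compute B-rep geometry here; we just echo the call.
        return f"{function}({params})"

# The TOP layer: a recorded sequence of kernel instructions ("features").
history_tree = [
    {"function": "create_sketch", "params": {"plane": "XY", "profile": "rect 40x20"}},
    {"function": "extrude",       "params": {"distance": 10.0, "direction": "+Z"}},
    {"function": "boolean",       "params": {"operation": "SUBTRACT", "tool": "hole_d5"}},
]

def regenerate(kernel, tree):
    """Replay every recorded feature through the kernel, in order."""
    results = []
    for feature in tree:
        results.append(kernel.execute(feature["function"], **feature["params"]))
    return results

for step in regenerate(ToyKernel(), history_tree):
    print(step)

# Hand this same tree to a different kernel and it breaks (or behaves differently)
# as soon as one recorded function or parameter has no direct equivalent there.
```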

Moving this TOP layer from one kernel to another kernel is very much like taking a FORTRAN program and trying to compile it with a C compiler. It won't work. The functions and related parameters are just not compatible. As such, the functions that are recorded in the history tree have to be translated to work on a different kernel. The CAD industry has made several attempts to build translators that will translate this TOP layer from one kernel to another - but we have had very limited success. By the way, this is also one of the reasons for slow progress on geometry kernels - we are too locked into this TOP layer. If we make too many changes to the kernel we may impact upward compatibility with previous-version "history trees", i.e. “kernel functions”. There have been many examples of this problem over the history of CAD.

Moving the TOP layer from kernel to kernel may present the highest risk to product development. This translation has to work perfectly; otherwise history/design intent can be lost. By the way, if you do get a complete and robust translation of this TOP layer from one kernel format to another, you don't need to be concerned with the BOTTOM layer. The geometry will be recreated for you when the translated TOP layer is processed by the receiving kernel.

Now let’s consider the BOTTOM layer, i.e. the geometry. (And for you parametric modeling users who were able to get an accurate translation of the TOP layer: you can skip this section - unless your TOP layer has a lot of import or non-ordered geometry features in it.) As I mentioned earlier, geometry kernels can create geometry in a variety of different forms and at a variety of different resolutions (accuracy). Although we have industry exchange standards for geometry (IGES, STEP), as mentioned above we don't have standards for geometry accuracy or definition. Translation of geometry from one kernel to another is not flawless. You have most likely experienced this imperfection if you have ever translated geometry through the STEP or IGES formats.

If you have experienced errors or gaps in translated geometry it could be a result of, for example, translating an analytic surface to a NURBS surface, or vice-versa. Or it could be a geometry accuracy problem. In some cases the translation issues may be related to the IGES or STEP processors within the CAD tool, but it’s also possible that the issues are due to big differences in the sending and receiving geometry kernels. We all likely know how poor geometry translations can impact the product development process. When moving CAD geometry from one kernel to another, you can expect some of this. Fortunately most modern CAD systems have tools to quickly resolve these geometry imperfections - don't they?
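As a simplified illustration of one common failure mode - not a description of any particular translator - consider what happens when the sending system stitched its faces at a looser tolerance than the receiving kernel will accept. The numbers and the check in this Python sketch are invented.

```python
import math

SENDER_TOLERANCE = 1e-3     # accuracy the geometry was originally built at
RECEIVER_TOLERANCE = 1e-6   # accuracy the receiving kernel expects

# Endpoints of adjacent face boundaries after translation (illustrative values).
edge_pairs = [
    ((0.0, 0.0, 0.0),  (0.0, 0.0, 0.0)),       # coincident - fine everywhere
    ((10.0, 5.0, 0.0), (10.0, 5.0, 0.0004)),   # "closed" for the sender, a gap for the receiver
]

for a, b in edge_pairs:
    gap = math.dist(a, b)   # distance between points that should coincide
    print(f"gap={gap:.6f}  closed for sender: {gap <= SENDER_TOLERANCE}  "
          f"closed for receiver: {gap <= RECEIVER_TOLERANCE}")
```

Edges that the sending kernel considered sewn shut arrive as gaps that the receiving kernel has to heal - or report as errors.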

So how might a kernel change impact your product development process? Well, it depends. Will you be transferring the TOP layer or the BOTTOM layer with the kernel change? I hope you can begin to see the pros and cons of each. Here are a few final questions to ponder:
  • What do you think happens if you use one geometry kernel for concept design, and another for detail design? Can you "round trip" without losing data or degenerating the model?
  • How robust is your TOP layer? Do you maintain strict modeling standards?
  • What geometry accuracy does your geometry kernel run at? Does it make high quality geometry that other kernels can consume without error?
  • Are your drawings associated to the TOP layer or the BOTTOM layer?
  • How is "model-based" impacted when multiple geometry kernels are used in the product development process?
  • What downstream functions are driven by the TOP layer versus the BOTTOM layer?
Paul

Tuesday, January 15, 2013

Model-Based Engineering – Are We There Yet? (Part II)

  Part II    (Part I)

I've had many opportunities to conduct product development process assessments at some very large companies. In one such case I visited 4 or 5 different facilities scattered around the US. I interviewed about 70 or 80 different people: VPs, directors, managers, and key contributors to the process. When I conduct these assessments I pay close attention to the input and output of each stage in the process. The output from one stage is of course the input to the next stage – in some form or another. At least, that's the way it should work. Then I like to look at the individual activities, methods and tools used to support that particular stage of the process, also noting where and how data is managed as it flows through the process stage.

During this particular assessment we identified over one hundred different software tools that were being used in the development and manufacture of their products. This includes authoring tools, analyzing tools, calculation tools, optimizing tools, documenting tools, managing tools, rendering tools, publication tools, presentation tools, communication tools, and so on. What is interesting to consider is that most of the tools actually used to create product-specific data each created a unique file type. And in many cases these files (documents) were not directly consumable by any of the other tools that may be on the “input” or “receiving” end of the data. Many of these documents were disconnected islands that required manual updating. My experience tells me that this situation is not too uncommon.

Of course, multiple documents are a reality in any product development environment. A universal PLM/PDM environment will be critical in bringing these many documents together into one master. But there is also value in reducing the number of "documents". A tools consolidation initiative may be an important step towards "Model-Based". Considering the number of different tools used in the development of the product, how many disconnects could there be between actual product requirements and manufacturing? Disconnects always lead to standalone pieces of data, manual data entry and duplication of effort. Does your environment support "Model-Based"?

Here’s another interesting example. I've worked with many companies that can rarely, if ever, generate a model-based top-level engineering BOM of their product. The BOMs are typically created manually. Keeping the manufacturing BOM up to date and accurate is likely tedious, with much manual interaction. Unfortunately, this situation is not too uncommon either. Are your BOMs "Model-Based"?

There are many reasons that companies may not be able to generate a model-based top-level e-BOM. The assemblies could be so large that they simply can't load the entire assembly and scan it. But there can be other reasons. I see many examples of "overloaded" assemblies: assemblies that are packed full of information and detail that adds no value to the design, manufacture, or service of the actual product. Here is an example of what I mean by "overloaded". Perhaps you are in the industrial equipment business. Your products are likely packed full of purchased components. In the interest of completeness and quality you request detailed 3D models from your suppliers and integrate them as-is into your design. If you are not careful in this integration process you can quickly and unnecessarily double or even triple the complexity and size of your assemblies.
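To make that concrete, here is a small Python sketch of the traversal behind a model-based top-level e-BOM. The assembly structure and part names are invented; the point is simply that purchased components should roll up as single line items rather than dragging hundreds of supplier-modeled internals into the BOM (and into the assembly itself).

```python
# Invented assembly structure, for illustration only.
assembly = {
    "name": "Conveyor-100", "purchased": False, "children": [
        {"name": "Frame-Weldment", "purchased": False, "children": [
            {"name": "Side-Rail",    "purchased": False, "children": []},
            {"name": "Cross-Member", "purchased": False, "children": []},
        ]},
        {"name": "Gearmotor-XYZ", "purchased": True, "children": [
            # hundreds of supplier-modeled internal parts could live here - skip them
        ]},
    ],
}

def ebom(node, level=0, rows=None):
    """Walk the assembly and collect BOM rows, without expanding purchased items."""
    if rows is None:
        rows = []
    rows.append((level, node["name"]))
    if not node["purchased"]:   # treat purchased components as single line items
        for child in node["children"]:
            ebom(child, level + 1, rows)
    return rows

for level, name in ebom(assembly):
    print("  " * level + name)
```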

Getting a model-based top-level e-BOM could be a fundamental step in making progress towards "Model-Based". As such, one of the first steps towards "Model-Based" could be to optimize the master through some best practices around data reduction and data integration. In the end, “Model-Based” demands a rich master. If the master is already too full and unnecessarily complex - you have a problem. Is your master optimized for "Model-Based"?

One last example. I've experienced a few companies where the end product is a highly stylized, ergonomic, creative, and attractive product. The product is more “touchy-feely” than highly engineered. In many of these cases the majority of the intellectual property is defined by the “Artist”, i.e. the Industrial Designer. It’s the job of the “Detail” Designer to make the product manufacturable. In this situation, the design master is made up of the early concepts and aesthetic requirements. If a design change happens it almost always goes back to the artist. The change then ripples back through all the downstream stages. Can you guess how many times the detail designer might have to create and/or re-create the as-manufactured CAD model? Are your detailed 3D CAD models “Model-Based”?

I know I have some strange views on what “Model-Based” could be. Perhaps I'm completely off-base, but I hope I have stretched your thoughts on the topic just a bit. Let me know what you think. Feel free to add your comments below.

Paul

Monday, December 17, 2012

Model-Based Engineering, Are We There Yet?

  Part I

As I’ve mentioned before in several other posts, I’ve had the pleasure of visiting many design and manufacturing companies over the last 20 or so years, sometimes several per month. With some of these visits I even get the pleasure of conducting full product development process assessments. With these assessments I get a first-hand, detailed view of how companies design products, from requirements to manufacturing. I also get a glimpse of what their vision might be for the future of their design processes and where they see high-value opportunities in process improvement.

In a previous blog post from 2009 titled “Model Based Definition (MBD) – What’s the Hold-Up?” I gave my early views on the topic and asked my readers a few questions. It is still the most read article on my blog. Interest in this topic remains very high.

There is one consistency I find in any discussion I have regarding “Model-Based”, and that is that everyone seems to have a different idea of what it means. Do we refer to it as Model-Based “Engineering” or “Design” or “Definition” or what? And what's the difference? And what does the term “Model” really refer to? I have found, however, that if we translate all of this “Model-Based” discussion and debate down to the product development process, what most process experts/owners will agree to is that they need to somehow:

1. Eliminate the disconnects
2. Remove the redundancies
3. And move to a single product definition master
If this Model-Based “whatever” can help them with those three “opportunities”, they want it.

And I think it can, but we have to be in sync on what the real goal is. If “Model-Based” somehow refers to one or more of the three opportunities listed above, we have to know where these conditions may exist, where the priorities are, and what the vision is.

It was over 30 years ago that the CAD industry presented a solution to disconnects and redundancies in the process. They called it “associativity” (which, by the way, is still not a word). Everything was supposed to be associative to the 3D model, i.e. the master model (an old term). If the model changed, all downstream deliverables would update. Create data once and leverage it to the max. Was this not enough, or did it just not work? Based on my experiences, even the most sophisticated of companies still have many, and I mean many, disconnects, redundancies AND master documents in their processes. When I ask a company what they consider to be their “engineering” master (and I’m not just referring to the detailed mechanical engineering master), they may have trouble answering that question. And of course, their “engineering” master and “manufacturing” master are at least two different things.
For many people “MBD” refers to the merger of key characteristics of the manufacturing drawing with a rich 3D model, thus eliminating or greatly reducing the need for the detailed manufacturing drawing. But what’s the big deal with fully annotated and detailed 2D manufacturing drawings? What's the value of merging this data into the 3D model? Let’s see how this stacks up against the three opportunities.

Disconnects:

Most drawings today can be considered “Model-Based” in that they are extracted from a 3D model and are in most cases associated to the same. So in this case we should have no disconnect, as the drawings are already "Model-Based".

Redundancies:

Is it possible that some of the work done to fully annotate the drawing is redundant with the work that was done when creating the 3D model? But does PMI and 3D annotation eliminate this redundancy, or just move the activity to a different "document"? It is possible that we may be able to eliminate some standard dimensions/tolerances by better leveraging the 3D model. So there will be some value here.

The Single Master Document:

Perhaps the most significant value comes via the merger of one of the engineering masters (the 3D model) with one of the manufacturing masters (the 2D drawing). There is always high value in eliminating even one document in the process, and this is a big one. The big question is: how consumable is this rich document (the 3D model) by downstream functions? Functions such as Tech Pubs, Mfg., CAM, Tooling, Process Sheets, and the tricky one – QA. We are getting better at this, but there is still much progress to be made.

Is that all there is to “Model-Based”? Merging the manufacturing drawing into the rich 3D model? On the contrary, in my humble opinion this particular merger is perhaps one of the higher-cost, lower-value fruits that can be realized through a “Model-Based” initiative. Perhaps it’s a reasonable, although complex, place to start, but don’t let it stop there. There are so many other high-value opportunities with “Model-Based”.
In Part II of this post I want to share some other areas in the product development process where “Model-Based” can bring huge value, and in many cases at very low cost. If you have been able to eliminate or greatly reduce the number of 2D manufacturing documents being produced you have made great progress, no doubt. But I can almost guarantee you that even after all that work you most likely still:
1. have not eliminated all the disconnects
2. have not eliminated redundancies
3. and do not have one single product definition master
Paul