Tuesday, July 9, 2013

What is current best practice in part numbering systems?

What is the goal of a part numbering system?

The goal of any part numbering system is to uniquely identify the component approved for a specific application. Accurate, consistent, unambiguous part identification is essential for correct product assembly, testing and maintenance.

Part numbering schemes

Companies typically use one of the following types of part numbering schemes. Having been in the industry for a decade, I have shared below the common advantages and disadvantages of each scheme.

Intelligent – Traditional part numbering systems and document identification schemes originated over 50 years ago. At the time, a basic consideration was that unstructured information was very difficult to find, and it was therefore necessary to overload document identifiers and part numbers with search-related "helper" data.
In other words, by looking at the P/N one should be able to get an idea of what the item is. In this case the P/N effectively becomes the description of the particular part (or assembly), and usually each digit in the part number has a particular meaning.
With this type of scheme, a part number generated for a capacitor might be, for example, CS-100-003, where "CS" stands for capacitor, "100" is the capacitance value and "003" is a serialized suffix.
There is an almost universal fascination with designing the perfect part numbering system. Everyone starts by envisioning how convenient it would be to tell, at a glance, the important characteristics of a part, or the document number that describes the part.
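Because each field of such a number carries meaning, decoding it is essentially pattern matching. Here is a minimal sketch in Java, assuming the hypothetical CS-100-003 format above (not any particular company's standard):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IntelligentPartNumber {
    // Hypothetical format: <type code>-<value>-<serial suffix>, e.g. CS-100-003
    private static final Pattern FORMAT =
            Pattern.compile("([A-Z]{2})-(\\d{3})-(\\d{3})");

    public static void main(String[] args) {
        Matcher m = FORMAT.matcher("CS-100-003");
        if (m.matches()) {
            System.out.println("Type code: " + m.group(1)); // e.g. CS = capacitor
            System.out.println("Value:     " + m.group(2)); // e.g. 100 (capacitance)
            System.out.println("Serial:    " + m.group(3)); // e.g. 003
        }
    }
}
```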

Advantages of intelligent part numbering

Significant part numbers offer time savings downstream, and they can help prevent data entry mistakes and improve manufacturing efficiency. Here’s how:
·         Searching efficiency: With an intelligent part number you can search for and find particular types of parts from the part number alone. Searching for *CS*, for example, might show you all the capacitors a company has assigned part numbers to. This is really handy if you are designing something and need a particular capacitor in the design: you have to find out whether the capacitor you want to use already exists in the company or whether you need a new number. By searching on the part number ("CS") with wildcard characters it is easy to see all the capacitors and whether they have already been released (see the sketch after this list).
·         Reduction in error: Descriptive part numbers specify the group to which every part belongs, so you can immediately see when a part is in the wrong group.
·         Process improvements: Because parts with similar naming conventions are all handled the same way, you can predefine the change routings, review processes and manufacturing steps for each part number class or category.
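As a minimal sketch of that kind of prefix search, assuming the part master is just a list of part-number strings (a PLM system would normally run this query against its database), the wildcard lookup reduces to simple pattern filtering:

```java
import java.util.List;
import java.util.stream.Collectors;

public class PartNumberSearch {
    public static void main(String[] args) {
        // Hypothetical part master; in practice this comes from the PLM database
        List<String> partNumbers = List.of(
                "CS-100-001", "CS-220-002", "RS-470-001", "DS-001-003");

        // Equivalent of searching for "CS*": every capacitor in the catalogue
        List<String> capacitors = partNumbers.stream()
                .filter(pn -> pn.startsWith("CS-"))
                .collect(Collectors.toList());

        System.out.println(capacitors); // [CS-100-001, CS-220-002]
    }
}
```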

Disadvantages of intelligent part numbering

Folks couldn't decide whether a washer being used to cover a hole in a water tank should be given the P/N for hardware (nuts, screws, washers) or the P/N for hydraulic components (hole plugs, pipe caps, fittings, valves), for example: "It's a flat washer but we are using it as a hole plug, what number does it get?" A group of three or four people can end up standing around for an hour, reading and discussing the company's part numbering rules and procedures, trying to figure out what the P/N should be for a $1.50 part. Other disadvantages are:
·         Training and knowledge required: Mistakes are often made in encoding the part description into the number, some of which can be quite costly. The engineers who "know the code" use the part number; everyone else reads the part description. If a part's number says "10 amp fuse" while the description says "1.0 amp fuse", a lot of product can get shipped or serviced with the wrong fuse.
·         Error prone: You can never simply ignore an incorrect part assignment; otherwise, the category number becomes an unreliable indicator of its content. An aluminum casting that's accidentally assigned to the steel castings category requires an engineering change and full dispositioning. The error becomes even more painful if your number has been cast, engraved, etched or printed on the part. If there is a series of items (O-rings, screw lengths, resistors) with a common base number, one bad part assignment may block a future assignment and therefore require that the entire series be renumbered.
·         Inefficiencies: You may need a specialist to handle most part numbering if you use a significant scheme. In this case, a single person or group can become a bottleneck. And pulling a part number may require time and discussion, which slows down the design process.

Non-intelligent – On the other side of the battlefield is the army of people who say that a part number should just be a number, any number, that is unique to an item and doesn't tell you anything about what the part is. Also referred to as "non-significant" (not descriptive), these numbers are all numeric and as short as possible. Non-significant part numbers are typically serial (pulled in numerical order), regardless of the type of part.
Everything has a name and a number associated with it within a company, even the employees. Have you ever cared what a co-worker's employee number is? Probably not; you are more likely to care about their name. The same logic applies to part numbers in a non-significant part numbering system: the number a particular part has isn't important, but the description (or name) is. Numbers are usually assigned in numerical order by a specialized computer program such as a Product Lifecycle Management (PLM) system. That's the thinking behind non-significant part numbers.
Using this part numbering system, a resistor could be assigned part number "P1000012" or any other unique identifier.
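A minimal sketch of such sequential assignment, assuming a hypothetical "P"-prefixed, zero-padded counter; in practice the PLM system's own numbering service or a database sequence plays this role:

```java
import java.util.concurrent.atomic.AtomicLong;

public class SequentialPartNumbers {
    // Hypothetical starting point; a real system would persist the counter
    private final AtomicLong counter = new AtomicLong(1000011);

    public String next() {
        // Zero-padded, prefixed, and otherwise meaningless: P1000012, P1000013, ...
        return String.format("P%07d", counter.incrementAndGet());
    }

    public static void main(String[] args) {
        SequentialPartNumbers numbers = new SequentialPartNumbers();
        System.out.println(numbers.next()); // P1000012
        System.out.println(numbers.next()); // P1000013
    }
}
```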

Advantages of non-intelligent part numbering

Using this type of scheme will save your organization time upfront. You can ramp up new employees quickly, avoid relying too heavily on any one person and maintain the system without much overhead. Here's how:
·         Faster to assign: It takes little to no time to pull a sequential number for an item. Assigning a part number can happen fast.
·         Little effort needed: Data migration from a legacy system to a new one is easier, as are merger and acquisition efforts. If your organization hires new employees, they will not need to learn how to define a part number and can focus their attention on other tasks. Assigning a new part number requires minimal training.
·         Simple maintenance: It is easy to maintain this type of scheme, as it’s essentially a sequential list! You will not have to decide where and how a new part fits into your scheme.

Disadvantages of non-intelligent part numbering

Using a non-significant part numbering scheme isn’t completely error-proof; mistakes can happen, especially if data entry is involved, and managing similar parts can be difficult. Here’s why:
·         Requires a business system to search parts: Because it doesn't have meaning, a non-significant part number does not provide any cues to help a user evaluate a part. In order to navigate through spreadsheets of randomly assigned part numbers, you need a system that can search for parts based on description, name, size or another relevant attribute.
·         Error prone: With non-significant part numbers you have to search the description field to find, say, all the wire the company is using. This works if the description fields are consistent, for example if all wire used in the company has the word "wire" in the description, but that isn't always the case. Description fields in most part number databases are limited to a small number of characters, 40 for example, so the tendency is to abbreviate. "Wire", "cable", "condt" and "harnss" might all appear in descriptions, so if you are looking for wire it might not be easy to find. Only if strict discipline and rules are established for how to name things can non-significant part number systems be used successfully.
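As an illustration of why that naming discipline matters, here is a minimal sketch of a description search that has to account for inconsistent abbreviations; the synonym list and part data are invented for the example:

```java
import java.util.List;
import java.util.Map;

public class DescriptionSearch {
    // Hypothetical abbreviation list; a real catalogue needs agreed naming rules instead
    private static final List<String> WIRE_TERMS =
            List.of("wire", "cable", "condt", "harnss");

    public static void main(String[] args) {
        Map<String, String> parts = Map.of(
                "P1000012", "18 AWG hookup wire",
                "P1000013", "shielded cable 2-cond",
                "P1000014", "flat washer M6");

        // Without a normalized vocabulary, every synonym has to be checked explicitly
        parts.forEach((number, description) -> {
            boolean isWire = WIRE_TERMS.stream()
                    .anyMatch(term -> description.toLowerCase().contains(term));
            if (isWire) {
                System.out.println(number + " -> " + description);
            }
        });
    }
}
```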

Semi-intelligent or hybrid systems – One way that I have seen this work well is using part categories (e.g. commodity codes), where the business rules are fundamentally coupled to the physical parts. For instance, in the vast majority of cases an "electronic component" number will be quite sufficient, and there's no need to create separate resistor, capacitor and diode commodity-code prefixes unless the physical parts are, and always will be, treated quite differently from one another. These part types, even with separate and unique custom attributes, can all share the same part number format. Since these part categories represent clear and unchanging attributes, you'll want to keep the numbering groups very large. Another way of looking at a semi-intelligent part numbering system is using class codes to categorize the parts.
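A minimal sketch of that hybrid idea, assuming a hypothetical two-letter commodity-code prefix in front of an otherwise non-significant sequence (real category codes would come from your own classification scheme):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class HybridPartNumbers {
    // One large sequence per broad category; e.g. EC = electronic component, ME = mechanical
    private final Map<String, AtomicLong> sequences = new HashMap<>();

    public String next(String commodityCode) {
        AtomicLong seq = sequences.computeIfAbsent(commodityCode, c -> new AtomicLong());
        // The prefix carries the category; the rest of the number stays non-significant
        return commodityCode + "-" + String.format("%06d", seq.incrementAndGet());
    }

    public static void main(String[] args) {
        HybridPartNumbers numbers = new HybridPartNumbers();
        System.out.println(numbers.next("EC")); // EC-000001
        System.out.println(numbers.next("EC")); // EC-000002
        System.out.println(numbers.next("ME")); // ME-000001
    }
}
```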

Of course, the right part numbering system will differ depending on your company profile and product categories. Your business operations can also influence which part numbering system to use. An original equipment manufacturer (OEM) that creates its own part numbers has entirely different business needs than a company downstream in the supply chain.

Also consider your business tools. Will your current system support your part numbering needs? Can you maintain your part number system with your existing tool or will your part number system be more effective by bringing another system into the mix?

Here are some references from experts:
Clement et al.: Manufacturing Data Structures, page 20:
Another important point about item numbers is that they should be as short as possible. Part numbers are keyed, copied and used as verbal identifiers. The shorter the numbers, the more accurate people can be. Obviously, the greater the number of digits in a part number, the greater chance of error. We also recommend that only numeric digits be used.

Garwood: Bills of Materials: Structured for Excellence, page 73 (author's emphasis):
The solution...is to use shorter non-significant part numbers. We have found that part numbers of 5 or 6 digits are the most effective.

Watts: Engineering Documentation Control Handbook, page 49:
The most critical of these issues is that, over time, the significant numbering systems tend to break down. ... As time passes, variations arise which were not foreseen. One digit was set aside where two are now needed. Significant numbers thus tend to lose their significance. They no longer do the classification coding function intended by their inventors.

Friday, July 8, 2011

PTC System Monitor

Most developers and administrators struggle to troubleshoot a problem in the system coming either from badly configured SQL queries or from custom code. How about tracing a performance issue caused by a valid but unknown SQL query from Windchill (user-defined reports or custom code)? The current monitoring system has limited capability to point to the module/method/class/query that is eating up the most resources and making the system terribly slow. This is especially true for complex system configurations, where the time required to investigate an issue is huge.

During the PlanetPTC conference this year, PTC made a major announcement, introducing a new module called PSM (PTC System Monitor), which will be shipped at no cost along with the PTC Windchill product after October 2011.

PSM is a separate server-client system, powered by dynaTrace, that monitors the Windchill application. dynaTrace has emerged as the new leader in application performance management. Here are some salient features:

  1. Reduce Turnaround Time and Escalations: Based on PTC's experience with this tool, it is estimated that troubleshooting an issue with it will be 10 times faster than with the current approach.
  2. Easier Monitoring and Diagnosis: This tool comes packed with multiple dashboards, incidents and alerts, and built-in analysis tools. Runtime data can be cached and stored automatically. This is very useful when escalating a call to PTC tech support, where we need to exchange information with PTC: the tool allows us to automatically record sessions and export them to the PTC support team.

PSM is a heavyweight server-client system that requires its own database to be configured. However, there is flexibility to configure the PSM system on a remote machine other than the PTC server.

The PSM system is based on agents, sensors and measures, which are inserted logically into the Windchill system. Agents gather monitoring data (CPU, memory, threads) and diagnostic information (Java stack, arguments, SQL statements, etc.) and send it to the collector. Sensors are entry points, or measurement points, into the Java stack. Sensors recognize Java inheritance, so not all classes require sensors. The sensor specification is basically done by developers, which allows PSM to monitor a piece of code and its performance. PTC has already developed sensors for all the major APIs. If a developer needs to put a sensor into custom code, the code has to inherit the sensor specification provided by the tool so that the custom code can be monitored through PSM.

PSM is also capable of producing reports that summarize the overall health of the Windchill system, and PSM stays up even if Windchill is down. This feature is interesting in terms of reducing the admin effort required to manage and create the reports currently produced with the existing system monitoring tool.

Currently this tool is planned for the Windchill platform. Based on my interaction with PTC folks, they are also planning a FlexPLM release; however, I am not aware of the exact date. For Windchill, the tool will be available to Windchill 9.1 and 10.0 users after October 2011.

I am really impressed by this tool, as it will help developers and admins reduce the time spent investigating and monitoring the system and trace an issue to the exact point it comes from. It gives visibility and insight into the Windchill application. It also helps developers optimize their development efforts to maximize the user experience and release the product faster.

Monday, June 27, 2011

PLM Evolution and Challenges

PLM, initially known as PDM (Product Data Management), evolved from CAD applications. Manufacturing companies started using CAD to shorten the product development lifecycle. Once CAD tools were in use in the manufacturing process, the challenge was how to communicate between different divisions and how to maintain the data produced by the CAD tools. This brought about the concept of Product Data Management, which is nothing but having a central database for all the CAD data and using workflow processes to access it, review it and modify it.

Initial PDM applications mainly focused on content data, i.e. the data that is relevant to CAD. Later came some additional metadata requirements, like Part Name, Part Number, Reviewer, etc.; in the Windchill world these are defined as static fields in the data model. Later on, the concept of Instance Based Attributes (IBAs) was introduced to extend the metadata requirements: fields you can define on an instance/type, and these types can be configured in the system. This is the main concept behind the FlexPLM data model, as the Retail, Footwear and Apparel PLM process is much more metadata-driven. Conceptually it is the same; however, the implementation of the data model is different. Taking the main Windchill traits (iterated, versioned, foldered, etc.) and extending them to the next level on top of that, defining all the relations like Product, Product-Season, Product-Season-Source, Product-Season-Source-Specification and Product-Season-Source-Specification-BOM, has had a big performance impact. Because of these complex relations, the FlexPLM product is becoming just a transactional application rather than a full-fledged system where you can run margin reports, where-used reports or forecast reports, and extract/import data. The solution to this problem will be either for PTC to simplify the data model or to tightly integrate reporting tools.

Tuesday, June 21, 2011

JVM Memory Management


I would like to discuss the topic of JVM memory management for the Windchill or FlexPLM application.
Tomcat/JVM memory allocation plays a very significant role in determining Windchill/FlexPLM performance.
JVM memory is divided broadly into heap and non-heap memory.
What is heap memory? The Java VM allocates the runtime data of all class instances and arrays to heap memory.
Heap memory is further divided into three subtypes:
1.    Eden space/young generation
2.    Survivor space
3.    Old/tenured generation
Eden space/young generation: the pool where memory is allocated when any object is first created.
Survivor space: the pool where memory is allocated for relatively short-lived objects that survived an Eden space garbage collection.
Old/tenured generation: this pool contains long-lived objects (those that survived for some time in the survivor space).
Later in this article I will talk about allocating heap memory and the percentage allocation for each of these pools. Let me first clarify non-heap memory.
Non-heap memory: this pool holds all the data related to threads, constructors, methods and related class structures.
Non-heap memory is further divided into two subtypes:
1.    Permanent generation (Perm Gen)
2.    Code cache
Permanent generation (Perm Gen): this pool holds the reflective data of the virtual machine (classes and methods).
Code cache: memory utilized for compilation data, code data and package information.
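These pools can be observed at runtime with the standard java.lang.management API; a minimal sketch follows (pool names vary by JVM vendor and garbage collector, so treat the output as illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MemoryPools {
    public static void main(String[] args) {
        // Lists pools such as Eden Space, Survivor Space, Tenured Gen, Perm Gen, Code Cache
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            System.out.printf("%-20s type=%-8s used=%,d max=%,d%n",
                    pool.getName(), pool.getType(), usage.getUsed(), usage.getMax());
        }
    }
}
```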
Heap memory allocation for the JVMs:
1. Monitor the heap memory usage and the heap memory requirements of the application.
It is always good to keep the initial heap size and the maximum heap size at the same value.
2. In applications like Windchill, it is better to allocate about 40% of the heap memory to the Eden space/young generation as its initial size and keep the survivor ratio at 8,
so that most objects are collected and cleared in the Eden space and survivor space before being promoted to the tenured space (otherwise the tenured/old generation space can fill up quickly).
Specify the following options for all the JVMs (Tomcat and the Windchill method servers) so that garbage collection is efficient and effective:
1.    Disable explicit GC.
2.    Use the parallel GC.
Now coming to non-heap memory allocation:
we can set the max perm size at around 20% of the max heap size and
the initial perm size at 10% of the max heap size.
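As an illustration only, here is a sketch of HotSpot options reflecting the guidance above for a hypothetical 4 GB heap on a pre-Java-8 JVM (where the permanent generation still exists); the actual sizes must be derived from monitoring your own servers:

```
-Xms4096m -Xmx4096m        # initial heap = max heap (hypothetical 4 GB)
-XX:NewSize=1638m          # roughly 40% of the heap for the Eden/young generation
-XX:SurvivorRatio=8        # Eden : each survivor space = 8 : 1
-XX:+UseParallelGC         # parallel garbage collector
-XX:+DisableExplicitGC     # ignore explicit System.gc() calls
-XX:PermSize=410m          # initial perm size, roughly 10% of max heap
-XX:MaxPermSize=820m       # max perm size, roughly 20% of max heap
```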

In my future blogs, I would like to take you through the different tools and methodologies used for performance monitoring, capturing data and analyzing logs to identify root causes.

Monday, June 6, 2011

PLM for Engineering Procurement Construction (EPC)

In EPC, the contractor designs the installation, procures the necessary materials and constructs it. The contractor carries the project risk for schedule as well as budget, depending on the agreed scope of work. Some EPC companies do business all the way from proposal, design, construction, start-up and operation to decommissioning of industrial plants; in a few words, the whole plant lifecycle.

The ultimate “owners” of the projects in many cases have a different set of priorities. The problem is that every owner has different needs and wants to make changes to a “standard” plant concept.

Challenges of turnkey projects in EPC business

· Proposal

· Completion of Design

· Procurement of Equipment

· Logistics

· Managing the designer’s and contractor’s risk

· Dealing with regulators, Authorities, Third-Parties

On many complex projects, particularly in the areas of mining, manufacturing, power, nuclear and process, the cost and delivery challenges in the procurement of specialized equipment present the greatest risk to the project.

When selecting a PLM application for the EPC business, the implementer and owner consider the engineer-procure-construct phases and the respective stakeholders of the plant.

The EPC business works on two aspects: Products, which develop the equipment, and Projects, which manage the erection and maintenance of the plant. Projects handle more risk and must be ready to develop new equipment based on plant requirements.

Leading PLM applications and their integrations with CAD, ERP and environmental compliance have the features to manage the EPC business, but the gaps I have experienced in PLM applications for EPC are:

1. Some CAD tools are yet to be integrated with PLM.

2. Design automation and PLM integration

3. Project management is not efficient or user friendly for:

a. Import/Export feature

b. Alerts and follow-ups

c. Resource Management

d. Budget and costing

4. CAE tool and PLM integration

5. BOM Export / Import is not efficient

6. ERP and PLM integration is costly in terms of software license and skill set

7. Difficulties in Legacy system integration

Each of the above is very important for the respective stakeholders. The development of PLM started with document management for engineering and moved towards change, collaboration and configuration management. I feel these gaps can be bridged when the PLM product definition group, EPC experts and PLM product technicians shake hands.

Friday, May 27, 2011

User Experience Testing


I was once given the task of developing a complex report. It had a funky layout and lots of data pulled from various business objects across the system. The project plan showed 60 man-days of costly effort. Development started and finished on time. The report was shown a green flag after a couple of user acceptance testing rounds and sent to the battlefield (the production server).
There were no issues reported for the report for a year, and then one day we were told that the end users were no longer using it because it did not fit their needs: it contained a lot of extra information and was difficult to use. What went wrong? The report passed UAT. So what? There is a lot of software on the market that passes it. What is missing here is user experience testing. The users accepted the software initially only because they were new to it.
It's like test-driving a car and deciding to buy it, and only after a month of driving experience do you come to know the car's true problems. Even the big brands have had failed products despite their high quality.
Hence there is a need for a framework or a software methodology that makes user experience testing (UET) a part of software development. UET is all about measuring how end users feel while they experience the software over a period of time.
I have been hearing the term "minimize user clicks" for a while, and development teams take it as a stand-in for successful UET. But does it really guarantee a good software experience? Absolutely not. I am sure there are methodologies and testing frameworks for this, but it is whoever uses them effectively that makes the difference. And since PLM is about the product, the PLM system as a piece of software should have a good UET framework, and the PLM software itself should have a module to measure the experience of end users.
As an example, this is what I could think of in FlexPLM. Suppose we create a product called 1234 in season FW2011. The carryover of 1234 to season FW2012 should then depend on the UET results of 1234 for season FW2011. But before drawing a conclusion, I would like to ask this question: what is the basis for carrying over a product from one season to another? Anyone?