From the category archives:

Trends


We live in the era of smart products. Modern smartphones are a good confirmation of that. The average person today carries in their pocket a computer with computational capability equal to or greater than the computers the aerospace and defense industry once used for navigation. In addition, your smartphone has communication capabilities (Wi-Fi and Bluetooth) that make it even more powerful. If you think about the cost and availability of boards like Raspberry Pi and Arduino, you can understand why and how these devices are revolutionizing so many products these days. However, the widespread adoption of these devices has drawbacks.

Smart products are bringing a new level of complexity everywhere. It starts with engineering and manufacturing, where you need to deal with complex multidisciplinary issues related to the combination of mechanical, electronic and software pieces. The last one is a critical addition to product information. The bill of materials has to cover not only mechanical and electronic parts, but also software elements.
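As a rough illustration (the part numbers, names and structure below are my own invention, not taken from any specific PLM system), a multidisciplinary BOM can link mechanical, electronic and software items in one tree:

```python
from dataclasses import dataclass, field

@dataclass
class BomItem:
    part_number: str
    description: str
    discipline: str            # "mechanical", "electronic" or "software"
    version: str = "1.0"
    children: list = field(default_factory=list)

# A hypothetical smart thermostat: one BOM spanning all three disciplines
thermostat = BomItem("TH-100", "Smart thermostat", "mechanical", children=[
    BomItem("TH-110", "Plastic enclosure", "mechanical"),
    BomItem("TH-120", "Main PCB", "electronic", children=[
        BomItem("TH-121", "Wi-Fi module", "electronic"),
    ]),
    BomItem("SW-200", "Device firmware", "software", version="2.3.1", children=[
        BomItem("SW-210", "Open source TLS library", "software", version="1.0.2"),
    ]),
])

def items_by_discipline(item, discipline):
    """Walk the BOM tree and collect all items for one discipline."""
    found = [item] if item.discipline == discipline else []
    for child in item.children:
        found += items_by_discipline(child, discipline)
    return found

software_parts = items_by_discipline(thermostat, "software")
print([p.part_number for p in software_parts])  # ['SW-200', 'SW-210']
```

The point of the sketch is that the software items live in the same structure as the mechanical and electronic ones, so one query can answer "what software is in this product?"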

Another aspect is related to the operation of all these smart products. Because of their connectivity, operation requires dealing with software, data and other elements that can easily turn your manufacturing company into a web operations facility with servers, databases, etc.

As soon as devices are exposed to software, the problem of software component traceability becomes critical. Configuration management and updates are a starting point. But it quickly comes down to security, which is very critical today.

The GCN article – How secure are your open-source based systems? – speaks about the problem of security in open source software. Here is my favorite passage:

According to Gartner, 95 percent of all mainstream IT organizations will leverage some element of open source software – directly or indirectly – within their mission-critical IT systems in 2015. And in an analysis of more than 5,300 enterprise applications uploaded to its platform in the fall of 2014, Veracode, a security firm that runs a cloud-based vulnerability scanning service, found that third-party components introduce an average of 24 known vulnerabilities into each web application.

To address this escalating risk in the software supply chain, industry groups such as The Open Web Application Security Project, PCI Security Standards Council and Financial Services Information Sharing and Analysis Center now require explicit policies and controls to govern the use of components.

Smart products also leverage open source software. The security of connected devices and smart products is a serious problem to handle. Which brings me to think about how hardware manufacturing companies can trace software elements and protect their products from potential vulnerabilities.
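A minimal sketch of what such traceability could look like (the product names and component versions are invented for illustration; the vulnerability is the real Heartbleed CVE, which affected OpenSSL 1.0.1f): given a software BOM per product and a feed of known-vulnerable component versions, flag which products are affected:

```python
# Hypothetical software BOM: product -> list of (component, version)
software_bom = {
    "smart-thermostat": [("openssl", "1.0.1f"), ("busybox", "1.23.0")],
    "smart-lock":       [("openssl", "1.0.2k"), ("mbedtls", "2.4.0")],
}

# Hypothetical feed of known-vulnerable component versions
known_vulnerabilities = {
    ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
}

def affected_products(bom, vulns):
    """Trace each product's software components against the vulnerability feed."""
    report = {}
    for product, components in bom.items():
        hits = [vulns[c] for c in components if c in vulns]
        if hits:
            report[product] = hits
    return report

print(affected_products(software_bom, known_vulnerabilities))
# {'smart-thermostat': ['CVE-2014-0160 (Heartbleed)']}
```

Without software components in the product BOM, a manufacturer simply cannot run this kind of query when the next vulnerability is disclosed.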

What is my conclusion? Covering all aspects of product information, including software, is becoming absolutely important. For many manufacturing companies, the information about mechanical, electronic and software components is siloed in different data management systems. In my 2015 PLM trends article, I mentioned the importance of new tools capable of managing multidisciplinary product information. Software BOM security is just one example of the trend. The demand for systems able to handle all aspects of the product BOM is increasing. Just my thoughts…

Best, Oleg

photo credit: JulianBleecker via photopin cc



Customer interest in cloud PDM solutions is growing. I guess there are multiple factors here – awareness of cloud efficiency and transparency, less concern about cloud security, and improved speed and stability of internet connections. If you are not following my blog, you can catch up on my older blog articles about cloud PDM – Cloud PDM ban lifted. What next?; Cloud PDM hack with Google Drive and other tools; Cloud can make file check-in and check-out obsolete. The confluence of new technologies around cloud, web, mobile and global manufacturing is creating a demand for cloud (or web based) solutions helping distributed design teams.

So, where is the challenge for cloud PDM? My hunch is that the biggest one is how to sell cloud PDM to manufacturing companies. I can divide all customers into two groups – larger manufacturing companies that have already implemented PDM solutions, and smaller manufacturing firms that are still managing CAD design with folders, FTP and Dropbox accounts.

Analysts, researchers and PDM marketing pundits are trying to convince companies that cloud PDM can become a great enabler for collaboration and that leaving CAD data “not managed” can bring even greater risk to the organization. There is nothing wrong with that… PDM was built around the idea of taking control over data. However, the idea of “control” is not something engineers like. Ed Lopategui speaks about engineers and control in his latest blog post – The day the strength of PDM failed. Here is a passage I liked:

The second reason, which is not so legitimate, is a loss of control. The reason so many engineers pine about the days of paper-based PDM in document control departments (or instead nothing at all) is that world could be circumvented in a pinch. It was flawed because it was run by humans, and consequently also replete with errors. Replaced with immutable and uncaring software, engineers working in groups nonetheless become irritated because they can’t just do whatever they want. You see this very conflict happening with regard to source control in software development circles. The order needed to manage a complex product necessarily makes manipulating pieces of that engineering more cumbersome. It’s one thing to be creating some widget in a freelance environment, it’s another matter entirely when that end product needs traceable configuration for a serialized certification basis. And that will happen regardless of how the software operates.

Here is the thing… Maybe cloud PDM should stop worrying about controlling data, think more about how to bring comfort to engineers, and stop irritating users with complex lifecycle scenarios? It made me think about the practice known as “shadow IT”. For the last few years, shadow IT and cloud services have had a lot in common. Don’t think about shadow IT as a bad thing. Think about the innovation shadow IT can bring to organizations.

The Forbes article “Is shadow IT a runaway train or an innovation engine?“ speaks about how shadow IT can inject some innovative thinking into organizations. This is my favorite passage:

As we reported last month, one corporate employee survey found that 24% admit they have purchased and/or deployed a cloud application — such as Salesforce.com, Concur, Workday, DropBox, or DocuSign. One in five even use these services without the knowledge of their IT departments.

The rise of shadow IT may actually inject a healthy dose of innovative thinking into organizations, at a time they need it most. The ability to test new approaches to business problems, and to run with new ideas, is vital to employees at all levels. If they are encumbered by the need for permissions, or for budget approvals to get to the technology they need, things will get mired down. Plus, shadow IT applications are often far cheaper than attempting to build or purchase similar capabilities through IT. 

What is my conclusion? Stop controlling data and bring the freedom of design work back to engineers. I understand it is easy to say, but very hard to implement. Controlling data is a very fundamental PDM behavior. Re-imagining it requires some innovative thinking. It is also a question of how to stop asking engineers to check in, check out and copy files between different locations. Maybe this is the innovation folks at Onshape are coming up with? I don’t know. In my view, cloud PDM tools have the opportunity to change the way engineers work with CAD data. Many new services became successful by providing cloud applications that made existing working practices much easier than before. Just my thoughts…

Best, Oleg

photo credit: Dean Hochman via photopin cc



Computers are changing the way we work. That is probably too broad a statement, but since today is Friday afternoon, it should be fine :) . I want to take a bit of a futuristic perspective today. Google, the internet and computing are good reasons why our everyday habits today are different from what we had 10 years ago. Back in the early 2000s we bought paper maps before going on vacation and kept paper books with the phone numbers of people we needed. Look how different it is now. Maybe we still need to make a hotel reservation before a trip, but most of the things we do can be done online via the internet and mobile devices.

A month ago, I posted about connecting digital and physical entities. I was inspired by Jeff Kowalski’s presentation at AU 2014. You can get a transcript and video by navigating to the following link. The idea of machine learning and “training” a computer brain to find an optimal design is inspiring. The following passage from Kowalski’s presentation is key, in my view:

 …we’re working on ways to better understand and navigate existing solutions that might be relevant to your next design project. Using machine learning algorithms, we can now discover patterns inherent in huge collections of millions of 3D models. In short, we can now discover and expose the content and context of all the current designs, for all the next designs. Taxonomies are based on organizing things with shared characteristics. But they don’t really concern themselves with the relationships those things have with other types of things — something we could call context. Adding context reveals not only what things are, but also expresses what they’re for, what they do, and how they work.

Nature explores all of the solutions that optimize performance for a given environment — what we call evolution. We need to do the same thing with our designs. But first we have to stop “telling the computer what to do,” and instead, start “telling the computer what we want to achieve.” With Generative Design, by giving the computer a set of parameters that express your overall goals, the system will use algorithms to explore all of the best possible permutations of a solution through successive generations, until the best one is found.
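The generative idea Kowalski describes can be caricatured in a few lines of code: instead of telling the computer what the design is, you give it a goal and constraints and let it search candidate solutions. This toy random search (the "stiffness" uses the real second moment of area formula for a rectangle, but everything else is purely illustrative) looks for beam dimensions that minimize material while meeting a stiffness goal:

```python
import random

def stiffness(width, height):
    """Second moment of area of a rectangular cross-section."""
    return width * height ** 3 / 12.0

def generate_design(goal_stiffness, generations=10000, seed=42):
    """Randomly explore (width, height) candidates; keep the one with the
    smallest cross-section area that still meets the stiffness goal."""
    random.seed(seed)
    best = None
    for _ in range(generations):
        w, h = random.uniform(1, 20), random.uniform(1, 20)
        if stiffness(w, h) >= goal_stiffness:        # constraint: goal met
            area = w * h                             # objective: minimize material
            if best is None or area < best[0]:
                best = (area, w, h)
    return best

area, w, h = generate_design(goal_stiffness=500.0)
print(f"best area {area:.1f} at width {w:.1f}, height {h:.1f}")
```

Real generative design systems use far smarter search than uniform random sampling, but the contract is the same: the human states the goal, the computer explores the permutations.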

Another time I was thinking about artificial intelligence, machine learning and self-organized systems was in my article – How PLM can build itself using AI technologies. The idea of The Grid, which allows a website to organize itself based on a set of input parameters and content learning, is interesting. It made me think about a future PLM system that defines its own behaviors based on capturing information and processes from a manufacturing company.

The article Google search will be your brain puts another interesting perspective on the evolution of computers and information systems. Take some time over the weekend and read the article. The story of neural nets is fascinating, and if you think about the potential to train a net with design knowledge, it could help capture requirements and design commands in the future. Here is an interesting passage from the article explaining how neural nets work:

Neural nets are modeled on the way biological brains learn. When you attempt a new task, a certain set of neurons will fire. You observe the results, and in subsequent trials your brain uses feedback to adjust which neurons get activated. Over time, the connections between some pairs of neurons grow stronger and other links weaken, laying the foundation of a memory.

A neural net essentially replicates this process in code. But instead of duplicating the dazzlingly complex tangle of neurons in a human brain, a neural net, which is much smaller, has its neurons organized neatly into layers. In the first layer (or first few layers) are feature detectors, a computational version of the human senses. When a computer feeds input into a neural net—say, a database of images, sounds or text files—the system learns what those files are by detecting the presence or absence of what it determines as key features in them.
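To make the "feedback adjusts the connections" idea in the passage concrete, here is a deliberately tiny sketch: a single artificial neuron (not a real library, and far smaller than any practical net) that learns from feedback to fire when the sum of its two inputs is positive:

```python
import math
import random

def train(samples, epochs=200, lr=0.5, seed=0):
    """One sigmoid neuron trained by feedback: each error signal
    strengthens or weakens the input connections (weights)."""
    random.seed(seed)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            z = x[0] * weights[0] + x[1] * weights[1] + bias
            activation = 1 / (1 + math.exp(-z))       # does the neuron "fire"?
            error = target - activation               # feedback signal
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Training data: fire (1) when the input sum is positive, stay quiet (0) otherwise
samples = [((1.0, 1.0), 1), ((-1.0, -1.0), 0), ((2.0, -1.0), 1), ((-2.0, 1.0), 0)]
weights, bias = train(samples)

def predict(x):
    z = x[0] * weights[0] + x[1] * weights[1] + bias
    return 1 / (1 + math.exp(-z)) > 0.5

print(predict((3.0, 1.0)))   # fires on an unseen positive-sum input
```

Real neural nets stack many layers of such units, but the learning loop is the same: activate, compare to the target, and nudge the connection strengths.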

So, who knows… maybe in the not-so-distant future, CAD and PLM systems will provide a search-based experience helping engineers design and manufacture in a completely different way.

What is my conclusion? While it still sounds like a dream, I can see some potential in making design work look like a search for an optimal solution with specific constraints and parameters. A well-trained algorithm could do the work in the future. Just thinking about that fires so many questions – how long will it take to train the net, what will be the role of engineers in future design, and many others. But these are just my thoughts… Maybe it will inspire you too. Have a great weekend!

Best, Oleg


Top 5 PLM trends to watch in 2015

January 15, 2015

Holidays are over and it was a good time to think about what you can expect in engineering and manufacturing software related to PLM in the coming year. You probably had a chance to listen to my 2015 PLM predictions podcast a few months ago. If you missed that, here is the link. Today I want to […]


How many enterprise PLM systems will survive cloud migration

January 14, 2015

Cloud adoption is growing. For most existing PLM vendors, it means thinking about how to migrate existing platforms and applications to the cloud. I covered related activities of PLM vendors in my previous articles. Take a look here – PLM cloud options and 2014 SaaS survey. It can give you an entry point […]


Utility and future PLM licensing models

January 13, 2015

The Razorleaf article More PLM Licensing models made me think about the business model and licensing transformation happening these days in the engineering and manufacturing industry. I guess we knew changes were coming… Back in 2012 I shared some of my thoughts about PLM Cloud and Software Licensing Transformation. From a bit different perspective, I have discussed future […]


Why today’s CAD & PLM tools won’t become future platforms?

January 12, 2015

PLM business and software vendors are transforming. Manufacturing companies are looking for new types of solutions that can deliver faster ROI as well as become a better place for engineering and manufacturing innovation. Customer dissatisfaction with slow ROI and a low value proposition is growing. Back in 2012 I was listening to Boeing […]


The demand for PLM services

January 8, 2015

Services are an important part of every PLM implementation. A news article caught my attention – Kalypso and GoEngineer form strategic partnership. I found it interesting, especially the following passage: “The Kalypso-GoEngineer partnership enables both firms to scale our businesses to better serve the growing demand for PLM services and software,” said George Young, CEO of […]


What stops manufacturing from entering into DaaS bright future?

January 7, 2015

There are a lot of changes in the manufacturing ecosystem these days. You have probably heard about many of them. Changes are coming as a result of many factors – the physical production environment, IP ownership, cloud IT infrastructure, connected products, changes in the demand model and mass customization. The last one is interesting. The time when manufacturing was presented […]


How PLM can “build itself” using artificial intelligence technologies

January 6, 2015

I had a chance to visit The Art of the Brick exhibition in Boston’s Faneuil Hall Museum a few days ago. If you follow me on social media websites, there is a chance you noticed a few pictures. Afterwards, I read more about LEGO artist Nathan Sawaya. What impressed me is the power of a “simple LEGO brick”. A simple […]
