Scale is one of the fancier words that comes up whenever different technologies are discussed or debated. So, speaking about cloud CAD, PDM and PLM, the discussion has to turn to the “scaling factor” too. Now that the PLM cloud switch has finally happened, it became clear that vendors and customers will have to move from buzzword-laden statements about “cloud” to more specific discussions about the particular cloud technologies they are using. Very often, cloud deployment relies on so-called IaaS (infrastructure as a service), which vendors use to deploy their solutions. PLM vendors are using IaaS as well. I spoke about it a bit in my post – Cloud PLM and IaaS options. In my view, Siemens PLM has made the boldest statements about following an IaaS strategy in delivering cloud PLM solutions. At the same time, I believe all other vendors, without exception, are using the variety of IaaS options available on the market from Amazon, Microsoft and IBM.
An interesting article caught my attention earlier today – Google nixes DNS load balancing to get its numbers up. The article describes Google demoing the scaling capabilities of its cloud platform. Google’s blog post provides a lot of detail about the specific setup used for the tests and measurements:
This setup demonstrated a couple of features, including scaling of the Compute Engine Load Balancing, use of different machine types and rapid provisioning. For generating the load we used 64 n1-standard-4’s running curl_loader with 16 threads and 1000 connections. Each curl_loader ran the same config to generate roughly the same number of requests to the LB. The load was directed at a single IP address, which then fanned out to the web servers.
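The fan-out behavior described in the quote – a single IP address distributing incoming requests across a pool of web servers – can be sketched with a simple round-robin scheme. This is only an illustration of the general load-balancing idea, not Google’s actual Compute Engine Load Balancing implementation; the backend names and request labels are hypothetical.

```python
from itertools import cycle
from collections import Counter

class RoundRobinBalancer:
    """Minimal sketch of a load balancer: one entry point fans out to backends."""

    def __init__(self, backends):
        # cycle() walks the backend list in circular order, forever.
        self._backends = cycle(backends)

    def route(self, request):
        # Pick the next backend in round-robin order for this request.
        return next(self._backends)

# Hypothetical pool of web servers sitting behind a single address.
backends = [f"web-{i}" for i in range(4)]
lb = RoundRobinBalancer(backends)

# Send 1000 simulated requests through the single entry point.
assignments = Counter(lb.route(f"req-{n}") for n in range(1000))
print(assignments)  # each of the 4 backends receives exactly 250 requests
```

Real load balancers add health checks, weighting and connection affinity on top of this, but the core idea – many clients hitting one address, the balancer spreading the work evenly – is what the benchmark above exercises at a rate of a million requests per second.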
It is not surprising that Google made some competitive statements trying to differentiate itself from its major competitor – Amazon. Here is an interesting passage from the Gigaom writeup:
“Within 5 seconds after the setup and without any pre-warming, our load balancer was able to serve 1 million requests per second and sustain that level.”… this as a challenge to Amazon Web Service’s Elastic Load Balancing. “ELBs must be pre-warmed or linearly scaled to that level while GCE’s ELBs come out of the box to handle it, supposedly,” he said via email. Given that Google wants to position GCE as a competitor to AWS for business workloads, I’d say that’s a pretty good summation.
The discussion about cloud platforms and scalability made me think about the specific scaling requirements of cloud PLM and how they relate to platform capabilities. Unfortunately, you cannot find much information about that from PLM vendors. Most of them limit the information to simple statements about compatibility with a specific platform (or platforms). However, the discussion about scaling can be interesting and important. Thinking about it, I came to 3 main groups of scaling scenarios in the context of PLM: 1/ computational scale (e.g. when a PLM system is supposed to find design alternatives or resolve a product configuration); 2/ business processing scale (e.g. to support process management at scale in transactions or data integration scenarios); 3/ data processing scale (e.g. required to process significant data imports or analyses). Analysis of these scenarios could be interesting work, which of course goes beyond a short blog article.
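To make the third category – data processing scale – a bit more concrete, here is a minimal sketch of the pattern behind large data imports: split the incoming records into chunks and hand the chunks to a pool of workers. The record count, chunk size and worker count are arbitrary assumptions for illustration, and the per-chunk work is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(records, size):
    """Split a large import into fixed-size chunks."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def process_chunk(chunk):
    # Placeholder for real work: validating and loading one chunk of records.
    return len(chunk)

# Hypothetical import of 10,000 records, processed 500 at a time by 8 workers.
records = list(range(10_000))
with ThreadPoolExecutor(max_workers=8) as pool:
    processed = sum(pool.map(process_chunk, chunked(records, 500)))

print(processed)  # 10000 – all records accounted for
```

The interesting platform question is what happens when “10,000 records” becomes tens of millions: at that point, the number of workers and where they run is exactly the kind of IaaS scaling capability discussed above.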
What is my conclusion? The coming years will bring an increasing number of platform-related questions and differentiation factors in the PLM space and in enterprise software in general. This will come as a result of solution maturity, use cases and delivery scenarios. The cost of the platforms will matter too. Both customers and vendors will be learning about delivery priorities and how future technology deployments will match business terms and expectations on both sides. Just my thoughts…