PLM and a future of deep linking


I like links. Links are fascinating: they are about connections between products, people, companies and things. The nature of our life today is to stay connected, and that is why links matter. It is natural to see links appearing everywhere in engineering and manufacturing too. Think about a product and its dependencies. The information is naturally linked across assemblies, parts, documents, bills of materials, materials, suppliers, manufacturing, the shop floor, orders and many other things.

It is in the nature of PLM for things to be connected. At the same time, the reality of engineering software is highly fragmented. The time when vendors and customers believed in a single system (or database) that could contain and manage all information is over. Customers use multiple applications, and it is not unusual to see two or more PLM systems in the same company. When it comes to the supply chain, the situation is even more complex.

Application integration remains one of the most painful aspects of enterprise software, and PLM can clearly lead the wave of complexity involved in implementations. My article yesterday – How PLM can avoid cloud integration spaghetti – was a warning to everyone who imagines that the cloud will be a silver bullet that kills integration pain. It won't. Cloud integration can sometimes be even more complex than traditional integration hacks using SQL and ETL tools.

I want to continue discussing the topic of cloud integration. The topic I'm bringing up today is so-called "deep linking". If you're not familiar with it, navigate to the following Wikipedia article – Deep Linking – to get some background information.

In the context of the World Wide Web, deep linking consists of using a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website (e.g. http://example.com/path/page), rather than the home page (e.g. http://example.com/). The technology behind the World Wide Web, the Hypertext Transfer Protocol (HTTP), does not actually make any distinction between “deep” links and any other links—all links are functionally equal. This is intentional; one of the design purposes of the Web is to allow authors to link to any published document on another site. The possibility of so-called “deep” linking is therefore built into the Web technology of HTTP and URLs by default—while a site can attempt to restrict deep links, to do so requires extra effort. According to the World Wide Web Consortium Technical Architecture Group, “any attempt to forbid the practice of deep linking is based on a misunderstanding of the technology, and threatens to undermine the functioning of the Web as a whole”.[1]

The TechCrunch article – A brief history of deep linking – brings an interesting perspective on the trajectory of deep linking development on the web and in the app world. Below is my favorite passage. It is important because it gives a notion of how to treat standards in the internet and application world.

In order for me to write this article, and for you to be able to read it, we have to share a common language: modern English. The same holds true for directing users through deep links — in order to construct a deep link that an application will understand, we need to have some shared way of expressing information or addresses. In software engineering, a well-defined shared vernacular is defined by a “standard.”

The problem with standards, though, is that many of them do not actually become standard practice, and introduce as much fragmentation as they resolve. I could define the word “basilafacitarian” as “a person who likes basil a lot,” but unless it enters the common vernacular, it’s useless as a means of communication for me to tell you that I like basil.

The same is true for an app speaking to another app; unless the URL “myapp://show-item/id123” is mutually agreed upon, there’s no guarantee what the receiving app will do with it.
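
To make that point concrete, here is a minimal sketch (Python, using the hypothetical "myapp" scheme from the quote) of how a receiving app might parse such a deep link. The scheme, the action name and the id format are exactly the conventions both apps must agree on in advance; nothing in URL syntax itself enforces them:

    from urllib.parse import urlparse

    def parse_deep_link(url: str) -> dict:
        """Parse a deep link of the (hypothetical) form myapp://show-item/id123."""
        parts = urlparse(url)
        if parts.scheme != "myapp":
            # Without a mutually agreed scheme, the link means nothing to us.
            raise ValueError(f"unknown scheme: {parts.scheme!r}")
        action = parts.netloc             # "show-item" -- the agreed action name
        item_id = parts.path.lstrip("/")  # "id123"     -- the agreed id format
        if not item_id:
            raise ValueError("deep link is missing an item id")
        return {"action": action, "id": item_id}

    print(parse_deep_link("myapp://show-item/id123"))
    # -> {'action': 'show-item', 'id': 'id123'}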

Coming back to the PLM world, we can see customers and vendors struggling with data and application fragmentation. It is not getting any better in the cloud PLM world – we just move to another set of APIs and technologies.

What is my conclusion? The idea of setting standards for deep linking is interesting. It can provide some level of solution to proprietary silos, data islands and the challenge of pumping data back and forth between applications. It is not simple and requires some level of synchronization between vendors and customers. We already have enough history on the web and in the app world to learn from and correct the course – to control data and make it transparent at the same time. Just my thoughts…

Best, Oleg

Image courtesy of Stuart Miles at FreeDigitalPhotos.net

  • Jeremie FEBURIE

    Hi Oleg,

    A few months ago, at PCO Innovation (today part of Accenture), we developed a deep linking technology on top of Siemens PLM Teamcenter.

    The idea was to let users generate a meaningful hyperlink themselves and share it with anybody inside the company.

    e.g. http://teamcenter/123456, where 123456 is the unique id of an item known in the company

    (by default, Teamcenter generates non-meaningful URLs like http://teamcenter/avFcXyuuoob …)

    But in PLM we don't only manage items; we also manage their history with the help of the revision concept.

    We also went a step deeper, allowing users to reach a specific revision directly through the hyperlink (123456-RevA).

    And for those who don't know what the latest revision is, we introduced keywords like 123456-LATEST.

    Here are some examples (a rough sketch of the resolution logic follows the list):

    LATEST : latest revision, whatever its release status

    LATEST_WORKING : latest revision with no release status

    LATEST_APPROVED : latest revision with Approved status
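
    The selection logic looks roughly like this (Python; names and data structures are illustrative, not our actual implementation):

        def resolve(link, revisions):
            """Resolve '123456', '123456-RevA' or '123456-LATEST_APPROVED'
            to a concrete revision, given the item's revision history."""
            item_id, _, selector = link.partition("-")
            # History is ordered oldest -> newest: [(revision, release_status), ...]
            history = revisions[item_id]
            if not selector or selector == "LATEST":
                return history[-1][0]                 # latest, whatever its status
            if selector == "LATEST_WORKING":
                return next(r for r, s in reversed(history) if s is None)
            if selector == "LATEST_APPROVED":
                return next(r for r, s in reversed(history) if s == "Approved")
            return selector                           # explicit revision, e.g. 'RevA'

        revisions = {"123456": [("RevA", "Approved"), ("RevB", "Approved"), ("RevC", None)]}
        print(resolve("123456-LATEST", revisions))           # -> RevC
        print(resolve("123456-LATEST_APPROVED", revisions))  # -> RevB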

    This system is very powerful and the customer loves it.

    But maybe some details are specific to the PLM domain (revision selection) and cannot be exposed to other applications…

    Just my thoughts… 🙂

  • beyondplm

    Hi Jeremie, thanks for sharing these examples! Do you think there is potential to share it between PLM vendors? My point is how to develop stable URLs that could be used by multiple PLM vendors to share data with the same semantics. The same URLs could become a foundation for RESTful services. I'd love to learn more about what you did if you can share that. Best, Oleg

  • Jeremie FEBURIE

    Hi,
    The idea behind our permalink wasn't to share those links with external applications, nor between other PLM systems.
    It was just to give any user the ability to share an item link without needing to get the link from the system.
    With this tool, they can do it manually and offline… they just need to know the item id to share (generally they know it…)
    Your blog post made me think about this tool, that's all…
    BR

  • beyondplm

    Jeremie, thanks for this clarification! It makes sense. I just took it one step further and thought about sharing such links programmatically, leveraging them for communication and coordination between different tools. Web APIs do exactly that, and this is an example of where a REST API can simplify future integration. You might be interested to take a look at this post:

    How PLM can avoid cloud integration spaghetti?

    http://beyondplm.com/2015/06/18/how-plm-can-avoid-cloud-integration-spaghetti/

    Best, Oleg

  • There have been layers in place to support some level of deep linking for some time across a variety of different solutions, but really only for consumption. The bigger problem is when you move beyond consumption – when deep linking needs to operate within the context of authoring and collaboration. Standards for consumption alone are hard enough already.

  • beyondplm

    Ed, I agree – “read” is easier to support. But if you think about a stateless REST API, context should come naturally; otherwise, the whole concept will fail. That said, I think I missed some of your points about “hard enough standards”. Please advise. Best, Oleg

  • Forgot to respond to this : ) Hopefully this makes more sense. : )

    My comments on standards with respect to consumption and deep linking… Creating the standards or open interfaces to share read-only information is considerably simpler. You just need to understand enough of the schema to display and retrieve the right information. You still have fragmentation to fight, but it's achievable.

    But just linking statically is not going to be enough. People want to interact with and transform the data – authoring and/or collaboration – even outside the system that created the data or is primarily responsible for it. In that case it's more than just schema across systems; it's applying governance and security models, and that's much more complicated.
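
    A toy sketch of that difference (Python, all names hypothetical): reading only needs enough shared schema to find and display the data, while writing drags in each system's own governance and security model:

        def read_item(store, item_id):
            """Consumption: enough shared schema to retrieve and display."""
            return store[item_id]

        def update_item(store, item_id, changes, user):
            """Authoring: the same schema, plus security and governance
            checks that every system defines differently."""
            item = store[item_id]
            if user not in item["editors"]:       # security model
                raise PermissionError(f"{user} may not edit {item_id}")
            if item["status"] == "Released":      # governance model
                raise ValueError("released items require a change order")
            item.update(changes)

        store = {"123456": {"status": "Working", "editors": {"jeremie"}, "mass": 1.2}}
        update_item(store, "123456", {"mass": 1.3}, "jeremie")  # allowed
        print(read_item(store, "123456")["mass"])               # -> 1.3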

  • beyondplm

    Ed, thanks for your comment! If you think about the gigantic global goal of enabling standards, you are right. However, that risks trying to boil the ocean. I think small steps towards that goal are important too. The ability to expose some interconnected information with semantic links could be an interesting first step – just my opinion.