How Is the Automotive Industry Handling the New Industrial Revolution?

Bill Gates is alleged to have once quipped that “If GM had kept up with technology like the computer industry has, we would all be driving $25 cars that got 1,000 MPG.” Even though the authenticity of this quote is questionable, it has been circulated throughout the internet for years because there is something about the sentiment that rings true to us. It certainly does not seem that the automotive industry has kept up with advancing technology the way that the computer industry has.

This may be due in part to the manufacturing infrastructure that has evolved over the years. Making sweeping upgrades to equipment and/or processes seems a very expensive and risky proposition. When you couple this with the fact that many automobile manufacturers today struggle to find enough demand for their current supply, it is easy to understand why keeping up with the latest technology isn’t always a top priority.

The problem with this reluctance, though, is that automobiles are not inexpensive consumables that people buy casually. Customers expect vehicles to meet the highest standards of safety and efficiency, and to include the latest available technology. How can manufacturers keep up with this demand for innovation without changing their processes?

It seems that some manufacturers are beginning to embrace the ways of the modern industrial world, and are finding ways to align their business models with the current wave of interconnectivity and streamlined automation.

Honda Manufacturing of Alabama

Honda’s largest light truck production facility in the world – a 3.7 million square foot plant – was faced with a problem all too common to large manufacturing facilities. Over the years, a number of different automation systems were introduced to help streamline production. With operations including blanking, stamping, welding, painting, injection molding, and many other processes involved in producing up to 360,000 vehicles and engines per year, it is not surprising that they found themselves struggling to integrate PLCs from multiple manufacturers, multiple MES systems, analytic systems, and database software from different vendors.

Of course, on top of these legacy systems, Honda continued to layer an array of smart devices on the plant floor and embed IT devices in plant equipment. The complexity introduced by this array of automation systems turned out to be slowing down the operations they were intended to streamline.

After reorganizing their business structure to merge IT and plant floor operations into a single department, Honda proceeded to deploy a new automation software platform that brought PLC data together with data from MES and ERP systems in a common interface, allowing the entire enterprise to be managed through a single system. This also allowed Honda to manage and analyze much larger data sets, which revealed new opportunities for further optimization. While this reorganization required a significant investment of resources, they were able to realize benefits immediately, and ultimately positioned themselves to maintain a competitive edge through the next decade or more.

Ford Motor Co.

Ford Motor Company operates a global network of manufacturing operations, and has had difficulty promoting collaboration and sharing best practices among its various plants. It found a solution using technology based on the Google Earth infrastructure.

Ford developed a cloud-based application that stores 2D and 3D representations of its global manufacturing facilities and allows users to navigate through these virtual environments, place pins, and attach video, images, and documents to those pins, all shared throughout Ford's global operations. Engineers and operators can share information about current plant conditions and procedures, which can be accessed in real time from anywhere in the world. The accumulated data can be used for training or to update standard procedures. With this global collaborative tool, Ford has a means of ensuring that every one of its employees has the latest, most accurate information on how best to perform a particular task, or how to avoid a problem that was encountered elsewhere.

We will have to see in coming years whether or not these innovations will lead to improved market performance for either of these manufacturers, but in the meantime it is probably safe to expect other companies to follow suit. With the advances in manufacturing technologies and machine-to-machine communication, it is becoming very difficult to remain competitive without playing by the same rules as everyone else. Industrial technology has advanced to the point that we are experiencing what people refer to as a new industrial era – or Industry 4.0. Reluctance is no longer a viable option.

5 Emerging Technologies Among Java Developers in 2018

1) Unit Testing:

If you want to become a better developer in 2018, you should work on your unit testing skills; and not just unit testing, but automated testing in general, which also includes integration testing. You can learn JUnit 5 and other advanced testing libraries like Mockito, PowerMock, Cucumber, and Robot to take your unit testing skills to the next level. Mockito is extremely effective: it lets you write unit tests for complex classes by mocking their dependencies and concentrating only on the objects under test (see the sketch below). If you are a beginner at unit testing and want to learn it in 2018, gear up and work hard to keep ahead of your rivals.
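
Mockito itself is a Java library; to keep this collection's examples in a single language, here is the same idea, stubbing a dependency and verifying interactions, sketched with Python's built-in unittest.mock. The OrderService and payment-gateway names are invented for illustration:

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """Hypothetical class under test; it depends on an external payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return "OK" if self.gateway.charge(amount) else "DECLINED"

class OrderServiceTest(unittest.TestCase):
    def test_successful_charge(self):
        gateway = Mock()                    # the mock stands in for the real dependency
        gateway.charge.return_value = True  # stub only the behaviour this test needs
        service = OrderService(gateway)

        self.assertEqual(service.place_order(100), "OK")
        gateway.charge.assert_called_once_with(100)  # verify the interaction

if __name__ == "__main__":
    unittest.main()
```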

2) Big Data and Java EE 8:

Big data has been a trendy and promising field in the software industry for the last three years, and plenty of jobs await developers who are comfortable with it; it remains among the top 10 technologies for Java developers in 2018. Java EE 8 also brings many new features: Servlet 4.0 with HTTP/2 support, new and improved JSON building and processing, improved CDI and RESTful web services, a new JSF version, and a new Java EE Security API. Even so, the majority of back-end developers tend to pick Spring as their Java technology in 2018.

3) Node.js:

Today we have a platform built on Chrome's JavaScript runtime, known as Node.js, which makes it easy to build fast, scalable network applications. Because Node.js is based on an event-driven, non-blocking I/O model, the resulting code is lightweight and efficient, making it a good fit for data-intensive, real-time (RT) applications that run across any number of distributed devices. It has emerged as one of the notable trends among Java developers in 2018; the pattern behind it is sketched below.
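
Node.js itself is JavaScript, but the event-driven, non-blocking idea is easy to sketch with asyncio in Python, the language used for examples in this collection: a single event loop overlaps two simulated I/O waits instead of blocking on each in turn.

```python
import asyncio

async def fetch(name, delay):
    # asyncio.sleep stands in for a non-blocking network or disk call
    await asyncio.sleep(delay)
    return f"{name} finished after {delay}s"

async def main():
    # Both "requests" share one event loop (no threads); total time is
    # roughly max(1.0, 0.5) seconds rather than the 1.5s sum.
    results = await asyncio.gather(fetch("a", 1.0), fetch("b", 0.5))
    print(results)

asyncio.run(main())
```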

4) Design Patterns and Code Readability:

No doubt, design patterns are neither a technology nor a framework, yet they remain a constant topic of discussion among Java developers in 2018. Even now, readable, clean, and maintainable code is the goal of many Java developers, and it should stay that way.

5) Angular and React:

If you want to be known as a full-stack developer, you must have considerable knowledge of front-end technologies too. For building an attractive, eye-catching presentation layer for a web app, Angular and React let you do this in a convenient and time-efficient manner. Though React and Angular are not the only options available nowadays, their growth and popularity are evident from the positive reviews of the people who use them.

Programming Language Migration Path

While I was preparing some personal background information for a potential client, I was reviewing all the programming languages that I have had experience with. I list the languages I'm most experienced with on my resume. However, it occurred to me that if I were to list all the languages I've worked with, the client would become overwhelmed with the resume and just write me off as either a total bit head or Looney Tunes. But as I reflected on all these different environments, I realized how much fun I've had being involved with the software development industry, and that a lot of that fun has to do with the learning process. I think this is what makes a good programmer. Not just the ability to write code, or come up with a very creative application, but the ability to learn. Let's admit it! If a programmer does not have good learning skills, then the programmer is going to have a very short career.

As an exercise, I'm going to list out my Programming Language Migration Path. I would be interested to hear from other programmers what their PLMP is as well. Here goes:

* Commodore Vic-20 Basic

* Commodore Vic-20 6502 Assembler

* Commodore 64 6510 Assembler (Lots of all-nighters with this one!)

* IBM BASIC

* IBM Assembler (My hate relationship with segment addressing.)

* dBASE II (Wow! Structured programming.)

* GWBasic

* Turbo Pascal (Thank you Mr. Kahn! Best $49 I ever spent!)

* Turbo C

* dBASE III + (Cool, my dBASE II report generator now only takes 2 hours to run instead of 7.)

* Clipper / Foxbase

* dBASE IV

* dBASE SQL

* Microsoft C (First under DOS, then under Windows 3.1)

* SuperBase (First under Amiga DOS, then for MS Windows)

* SQL Windows (Whatever happened to this? Gupta?)

* Visual Basic 2.0

* Delphi

* Visual Basic 3.0

* Access Basic / Word Basic (Microsoft)

* Newton Script (My first "elegant" language)

* Visual Basic 4.0 & 5.0

* HTML

* FormLogic (for Apple Newton)

* CodeWarrior C for Palm OS

* Visual Basic 6.0

* NS BASIC for Palm OS & Windows CE

* FileMaker 5

* Satellite Forms

* Visual C++

* REAL Basic for Mac 9.x & OSX

* Java

* CodeWarrior C++ for Palm OS

* Appforge for Palm OS & Pocket PC

* C#

* FileMaker Pro 7.0

Whew! Not only is this a good exercise to reflect on all the languages that I've worked with, but it is a good example of how languages and technology have progressed during the past 25 years. I'm sure that I'll be adding much more to this PLMP in the near future as well. And as with most programmers I know, there is so much more that I would like to learn but just don't have the time.

Another good exercise is to bring this up as a topic of discussion with a group of programmers after a nice long day at any technical trade show. For example, quite some time ago, after a long day at the OS/2 Developers Conference in Seattle (yes, dating myself here), I brought up the topic of 6502 Assembly Language programming. This was during dinner at around 7pm. The resulting conversation migrated to the hotel lobby, where it continued until around 2am. (Ah, the good ol' days.) ;)

(If you're a developer, I'd be interested in seeing your own personal Programming Language Migration Path. Shoot me an email to timdottrimbleatgmaildotcom.)

Timothy Trimble, The ART of Software Development

The Evolution of Python Language Over the Years

According to several websites, Python is one of the most popular coding languages of 2015. Along with being a high-level, general-purpose programming language, Python is also object-oriented and open source. A good number of developers across the world have been using Python to create GUI applications, websites, and mobile apps. The differentiating factor that Python brings to the table is that it enables programmers to flesh out concepts by writing less, and more readable, code. Developers can further take advantage of several Python frameworks to reduce the time and effort required to build large and complex software applications.

The language is currently used by a number of high-traffic websites, including Google, Yahoo Groups, Yahoo Maps, Linux Weekly News, Shopzilla, and Web Therapy. Likewise, Python finds great use in gaming, financial, scientific, and educational applications. However, developers still use different versions of the language. According to the usage statistics and market share data posted on W3Techs, Python 2 is currently used by 99.4% of the websites that use Python, whereas Python 3 is used by only 0.6%. It is therefore essential for each programmer to understand the different versions of Python, and how the language has evolved over the years.

How Has Python Evolved over the Years?

Conceived as a Hobby Programming Project

Despite being one of the most popular coding languages of 2015, Python was originally conceived by Guido van Rossum as a hobby project in December 1989. As Van Rossum's office remained closed during Christmas, he was looking for a hobby project to keep him occupied over the holidays. He planned to create an interpreter for a new scripting language, a successor to the ABC programming language, and named the project Python. After writing the interpreter, Van Rossum made the code public in February 1991. Today, the open source language is managed by the Python Software Foundation.

Version 1 of Python

Python 1.0 was released in January 1994. The major release included a number of new features, among them the functional programming tools lambda, filter, map, and reduce. Version 1.4 followed with several new features, such as keyword arguments, built-in support for complex numbers, and a basic form of data hiding, and was in turn followed by two more releases: version 1.5 in December 1997 and version 1.6 in September 2000. Version 1 of Python lacked many features offered by the popular programming languages of the time, but these initial releases created a solid foundation for the powerful, forward-looking language to come.
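
As a quick sketch of those tools (shown here in modern syntax; in Python 3, reduce has moved from the built-ins into the functools module):

```python
from functools import reduce  # a built-in in Python 1 and 2; in functools since Python 3

nums = [1, 2, 3, 4, 5]
squares = list(map(lambda n: n * n, nums))         # [1, 4, 9, 16, 25]
evens = list(filter(lambda n: n % 2 == 0, nums))   # [2, 4]
total = reduce(lambda acc, n: acc + n, nums)       # 15
print(squares, evens, total)
```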

Version 2 of Python

In October 2000, Python 2.0 was released with the new list comprehension feature and a garbage collection system. The syntax for list comprehensions was inspired by functional programming languages like Haskell, though Python 2.0, unlike Haskell, gave preference to alphabetic keywords over punctuation characters. The garbage collection system also added collection of reference cycles. The major release was followed by several minor releases, which added functionality such as support for nested scopes and the unification of Python's classes and types into a single hierarchy. The Python Software Foundation has announced that there will be no Python 2.8, but it will support version 2.7 of the language until 2020.
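
For example, a list comprehension expresses a loop, a filter, and a transformation in a single readable line:

```python
nums = [1, 2, 3, 4, 5, 6]
# One line replaces an explicit loop, an if-test, and list.append():
even_squares = [n * n for n in nums if n % 2 == 0]
print(even_squares)  # [4, 16, 36]
```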

Version 3 of Python

Python 3.0 was released in December 2008. It came with several new features and enhancements, along with a number of deprecated features. The deprecated features and backward incompatibility make version 3 of Python substantially different from earlier versions. Many developers therefore still use Python 2.6 or 2.7 to keep the features removed in the new major release. However, the new features of Python 3 made it more modern and popular, and many developers switched to version 3.0 of the language to take advantage of them.

Python 3.0 replaced the print statement with a built-in print() function that lets programmers specify custom separators and line endings. Likewise, it simplified the rules of ordering comparison: when the operands have no natural, meaningful order, the ordering comparison operators now raise a TypeError exception. Version 3 of the language also distinguishes text from binary data: all text strings are Unicode by default, while binary data lives in separate bytes objects that must be explicitly encoded and decoded.
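
A brief sketch of these Python 3 changes:

```python
# print is now a function, with configurable separator and line ending
print("alpha", "beta", "gamma", sep=" | ", end=".\n")   # alpha | beta | gamma.

# Ordering comparisons between unrelated types now fail loudly
try:
    "10" < 2
except TypeError as exc:
    print("TypeError:", exc)   # '<' not supported between instances of 'str' and 'int'

# Text vs. binary data: str is Unicode; bytes must be encoded/decoded explicitly
text = "naïve"
data = text.encode("utf-8")    # b'na\xc3\xafve'
print(data.decode("utf-8"))    # naïve
```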

As Python 3 is backward incompatible, programmers cannot use features like string exceptions, old-style classes, and implicit relative imports, and they must be familiar with the changes made to the syntax and APIs. They can use a tool called "2to3" to migrate applications from Python 2 to 3 smoothly. The tool highlights incompatibilities and areas of concern through comments and warnings, which help programmers change the code and upgrade their existing applications to the latest version of the language.
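
As an illustration, here is a small, hypothetical example.py before and after migration; running `2to3 example.py` prints the proposed changes as a diff, and `2to3 -w example.py` writes them back to the file:

```python
# example.py, before migration (valid under Python 2 only)
print "Hello, world"             # print statement
print "Total: %d" % (1 + 2)

# After `2to3 -w example.py`, the tool rewrites the file for Python 3:
#
#   print("Hello, world")        # print() function
#   print("Total: %d" % (1 + 2))
```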

Latest Versions of Python

At present, programmers can choose either version 3.4.3 or 2.7.10 of Python. Python 2.7 offers improved numeric handling and enhancements to the standard library, and it makes it easier for developers to migrate to Python 3 later. Python 3.4, on the other hand, comes with several new features and library modules, security improvements, and CPython implementation improvements, although a number of features are deprecated in both the Python API and the language itself. Developers who want support in the longer run can choose Python 3.4.

Version 4 of Python

Python 4.0 is expected to become available in 2023, after the release of Python 3.9. It is expected to come with features that help programmers switch from version 3 to 4 seamlessly, and experienced Python developers should be able to take advantage of a number of backward compatible features to modernize their existing applications without extra time and effort. Developers will have to wait many years for a clear picture of Python 4.0, but they should monitor the latest releases so they can migrate to version 4.0 easily when it arrives.

Version 2 and version 3 of Python are very different from each other. Each programmer must therefore understand the features of these distinct versions, compare their functionality against the specific needs of the project, and check which version of Python each framework supports. Where possible, developers should use the latest version of Python to get the new features and long-term support.

Harri has an avid interest in Python and loves to blog interesting stuff about the technology. He recently wrote an interesting Python blog on http://www.allaboutweb.biz/category/python/.

The High-Performance Application Factor in Dot NET

The Common Language Runtime was designed from the beginning with high performance in mind. This article is all about high-performance applications in .NET; it is not an exhaustive discussion.

We will discuss the following topics in this article:

1. Key points about .NET that need to be kept in mind.

2. Comments from two industry experts on this topic.

High-performance applications are a priority in .NET. When it comes to high performance in .NET, there are many points that should always be kept in mind. Let's have a look at a few of them.

1. Key points about .NET that need to be kept in mind.

• Profiling APIs that give more granular metrics on memory allocations would definitely be a huge improvement over the current APIs.

• Most of us are aware that .NET has a rich set of parallel programming APIs and libraries, like the Task Parallel Library and Akka.NET. The biggest challenge in this area is educating and communicating with users; making such abstractions familiar and consistent is very convenient for the wider community.

• There have been significant, measurable performance improvements in .NET since 4.0, which makes it worth revisiting any assumptions based on older versions of the .NET Framework.

• Garbage collection is a major recurring theme in all types of high-performance scenarios, and it has led to many CLR and language improvements, such as ValueTask and ref returns.

Interest in .NET has grown to cover a vast array of platforms. Not every professional realizes that .NET Standard has opened up plenty of platforms for writing high-performance applications, including IoT devices and mobile phones, and these devices present different challenges than regular desktops and servers. .NET Core is not yet well known among developers, so it is still awaiting more success stories about high-performance applications. End-to-end .NET systems now include microservices, serverless components, and containers, each of which can bring its own set of performance requirements.

2. Comments from two industry experts on this topic.

Let's have a look at what two well-known industry experts have to say about where .NET stands when it comes to performance, and how it compares with other mainstream platforms.

A. Maoni Stephens – the main developer of the .NET GC

B. Ben Watson – author of the books Writing High-Performance .NET Code and C# 4.0

There has been plenty of discussion in this area, and both experts have helped set its direction.

A. Maoni Stephens – She is the main developer of the .NET GC

The .NET GC is, no doubt, one of the most discussed areas of the platform's performance. Since she works on the GC, Maoni notes that her answers here are focused on it. According to her, many people hold the common belief that .NET was, and is, associated only with high productivity and not performance; yet plenty of products written on .NET have serious high-performance requirements. To make sure the GC can handle every possible requirement, before shipping a significant feature the team tests it with internal partners that run stressful workloads in complex environments like Exchange or Bing. Maoni adds that this way the team does not have to rely entirely on macro- and micro-benchmarks, which, despite being very useful, can fail to represent performance in real-world scenarios. You can read plenty of her blog posts on this topic.

Most customers want the best performance, and the team has to work accordingly.

B. Ben Watson – He is the author of the books Writing High-Performance .NET Code and C# 4.0

According to him, .NET is already in a strong position and is getting stronger at a rapid pace. The CLR team takes performance seriously and has achieved a lot in .NET, in areas like the JIT and the garbage collector. He says his own product has driven some of those changes, and it is gratifying to see the whole world benefit, not just the largest applications at Microsoft. Performance work on any platform is about tradeoffs: .NET gives you some incredible features in the runtime, but you have to play by its rules to get the best results and the most out of it. The highest levels of performance require the highest levels of engineering; other platforms will have different tradeoffs, but the engineering effort will always be required. There are also a few weaknesses, of course. In a vast online world where every request matters, factors like the GC and the JIT can get in the way of extreme performance. There are solutions, but they can demand significant effort, depending on how important that performance is to you.

Conclusion

Plenty of different trends have emerged around this topic. We also know that .NET server applications can now run on Linux, which is still unfamiliar territory for many developers. When it comes to high performance in .NET, the key points discussed above should be kept in mind, and the views of Ben Watson and Maoni Stephens will definitely be a great help.

Blockchain & IoT – How "Crypto" Is Likely Going To Herald Industry 4.0

Whilst most people only started to learn about “blockchain” because of Bitcoin, its roots – and applications – go much deeper than that.

Blockchain is a technology unto itself. It powers Bitcoin, and is essentially the reason why *so many* new ICOs have flooded the market – creating an “ICO” is ridiculously easy (no barriers to entry).

The point of the system is to create a decentralized database – which essentially means that rather than relying on the likes of “Google” or “Microsoft” to store data, a network of computers (generally operated by individual people) is able to act in the same way as a larger company.

To understand the implications of this (and thus where the technology could take industry) – you need to look at how the system works on a fundamental level.

Created in 2008 (1 year before Bitcoin), it is an open source software solution. This means its source code can be downloaded and edited by anyone. However, it must be noted that the central “repository” can only be changed by particular individuals (so the “development” of the code is not a free-for-all, basically).

The system works with what’s known as a Merkle tree – a type of data structure created to provide versioned data access to computer systems.

Merkle trees have been used to great effect in a number of other systems, most notably Git (source code management software). Without getting too technical, the tree basically stores a “version” of a set of data. Each version is numbered, and can thus be loaded any time a user wishes to recall an older version. In the case of software development, this means that a set of source code can be kept up to date across multiple systems.
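
As a rough illustration only, not exactly how Git or Bitcoin implement it, here is a minimal Merkle-root computation in Python: each leaf is hashed, sibling hashes are hashed together level by level, and the final root fingerprints the entire data set, so any change to any version is immediately detectable.

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then hash sibling pairs level by level up to one root."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # odd node count: duplicate the last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"tx1", b"tx2", b"tx3"]
print(merkle_root(blocks).hex())
# Any change to any block produces a completely different root:
print(merkle_root([b"tx1", b"tx2", b"tampered"]).hex())
```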

The way it works – which is to store a huge “file” with updates of a central data set – is basically what powers the likes of “Bitcoin” and all the other “crypto” systems. The term “crypto” simply means “cryptographic”, which is the technical term for “encryption”.

Irrespective of its core workings, the true benefit of wider “on-chain” adoption is almost certainly the “paradigm” that it provides to industry.

There’s been an idea called “Industry 4.0” floating around for several decades. Often conflated with “Internet of Things”, the idea is that a new layer of “autonomous” machinery could be introduced to create even more effective manufacturing, distribution and delivery techniques for businesses & consumers. Whilst this has often been harked to, it’s never really been adopted.

Many pundits are now looking at the technology as a way to facilitate this change. The reason is that, as especially evidenced by the likes of Ethereum, the various systems built on top of blockchain can actually be programmed to work with a layer of logic.

This logic is really what IoT / Industry 4.0 has missed thus far – and why many are looking at “blockchain” (or an equivalent) to provide a base-level standard for the new ideas moving forward. This standard will provide companies with the ability to create “decentralized” applications that empower intelligent machinery to create more flexible and effective manufacturing processes.

Film Radiography is Declining in Industrial Testing Applications

DODGED THE DIGITAL DILEMMA

The shift from analog to digital technology has given a new lease of life to non-destructive testing (NDT) applications in the industrial radiography market, broadening the scope beyond traditional applications. Digital X-ray systems are proliferating, with increased acceptance across all industry verticals, including the highly regulated and traditionally conservative aerospace and automotive industries. The most significant factor in this paradigm shift is cost: the savings with digital X-ray systems (in both computed and direct radiography) are some 5-6 times greater than with film-based systems. The shift is also being fueled by digital systems closing the gap in high-resolution imaging, which used to be a niche held by film radiography. Megatrends such as Industry 4.0, the Industrial Internet of Things, and Big Data are expected to progressively phase out radiography on film.

ADVENT OF PORTABLE RADIOGRAPHY EQUIPMENT

The industry has experienced a significant influx of portable equipment in recent years. With inspection activities needing to be carried out at multiple locations and in various orientations, demand for portable testing devices is increasing, and the demand for compact, lightweight devices that enable easier examination has been a key trend in the market. Innovation in manufacturing technologies is propelling the deployment of these products. Eliminating installation costs, and thereby reducing the total cost of ownership (TCO) of these devices, is further helping manufacturers strengthen their economic position in the market. The oil & gas industry, which uses testing across its supply chain to gauge structural integrity and continuously monitor intricate structures of various sizes, such as plates, tubes, and drilling machines, is expected to be among the most dominant end users of portable radiographic equipment.

DIRECT RADIOGRAPHY TO BE THE GROWTH ENGINE

What's leading the pack in digital radiography? Direct radiography is the fastest growing type, with a near double-digit growth rate that outpaces the overall market. This segment is aided by advancements in hardware, such as tubes, sources, and detectors, as well as software improvements that deliver better usability and efficiency. The advantages of direct radiography, including shorter exposure times, real-time applications, use of recognition software, reduced inspection time, lower environmental impact, portability, and increased dynamic range (enabling multiple thicknesses to be inspected in one shot), are driving its adoption across all industry verticals. Direct radiography equipment offers customers a high return on investment, which is the biggest contributing factor in its growth. Significant market opportunity for direct radiography (including real-time) exists in the automotive and aerospace segments, which are seeing very high growth rates, even exceeding that of the overall direct radiography market.