Is the Convergence of Enterprise IT Modernization and the Mainframer Skills Gap a Recipe for Disaster?

What do you get when you combine a technology that, on the surface, many wrote off years ago, but that in reality is growing at an accelerated pace, with a complete paradigm shift toward modernizing how the technology is used? Now throw in a well-publicized lack of young talent to carry it forward as the older generation ages out, and you get opportunity on every level.

Cloud computing isn’t growing at a break-neck pace because it’s the latest fad. DevOps hasn’t taken hold because it sounds cool. And automation isn’t the wave of the future because journalists need something to write about. These are just three of the many ways better software is being developed faster. And as more efficient ways to use that software emerge, enterprise IT can continually do more with less.

A byproduct of all this progress is the ever-increasing volume of data generated every second of every day. Accessing, using, storing, retrieving, and archiving that data is something DTS has been innovating around since our inception in 1991. It’s why we’re trusted by the Global 2000 today.

There’s a big difference between building a technology from the ground up and buying someone else’s work and throwing it into your product mix. As pioneers in dynamic disk space recovery and volume pooling technologies, we are best positioned to apply our technology to solve real-world problems, saving our clients precious time, resources, and often a LOT of money.

How a Software Purchase Saved Our Client From an Expensive Mainframe Hardware Upgrade
One such instance was a client with a sprawling IT environment and data housed in large data centers in the Pacific Northwest. The client is one of the largest international communications companies in the world, supporting hundreds of thousands of users, and the data and applications those users consume.

Like many mainframe shops today, this client lacked extensive in-house knowledge of the assembler and PL/I languages. They had many obsolete, highly customized installation exits from decades past and were dealing with a specific problem involving emergency logons to TSO.

They needed a long-term solution that addressed their lack of expertise in assembler and PL/I and that avoided a costly mainframe hardware upgrade. After attending a DTS Software Monthly Educational Webinar Series event, the client was convinced we could provide the solution they’d been looking for: problem-solving assistance and a way to upgrade legacy code without having to learn less-utilized coding languages.

In this Case Study, you can read how DTS engineers formulated a quick, user-friendly solution for rewriting legacy exits, saving the client time, money, and frustration. With our Easy/Exit product and a team of seasoned storage management experts behind it, the client was able to keep operating within their existing z/OS environment, avoiding a costly mainframe hardware upgrade and the associated services.

When asked about their experience with DTS, the Datacenter Manager for this client said, “Some of these exits were older than our younger programmers and we don’t know who did the original assembler coding, nor do we have the expertise to update it. So, it was great that we found a partner with a solution that reduced the headache associated with updating these exits. DTS’s Easy-Exit utility was easy to install, and their policy rules engine gave us a workaround for the assembler and PL/1 problem we were staring at.”

Automated Storage Management That’s Quick to Deploy and Easy to Learn for Fortune 500 Financial Client
Another recent success story comes from a Fortune 500 client who came to DTS Software looking for help in their IT modernization initiatives. Their legacy systems were governed by JCL dating back decades, and they needed software that could help them update code without taking up much of their systems administrators’ time and effort. You can read more about it in this Case Study.

This client has been in business for nearly half a century and has a vast IT environment with a dozen production systems and many more test LPARs in several data centers across the US, along with hundreds of analysts and tens of thousands of end-users. As one of the largest financial services organizations in the world operating in a heavily regulated industry, disruptions, downtime, and noncompliance were unacceptable.

Freeing Up Resources for More Strategic Initiatives
They needed help implementing automation for their repeatable storage management tasks, freeing up staff to focus on other, more valuable modernization initiatives. The solution needed to be easy to learn, quick to deploy, and come with a competitive total cost of ownership.

Our ACC (Allocation Control Center) Monarch solution was deployed to help them run more reliable jobs with fewer failures while enforcing SMS standards and preventing the disk allocation and space errors that consume time and resources. With ACC Monarch, the financial services company was able to automate much of its workflow so that its programmers no longer had to manage dataset policy through a series of emails and memos.

ACC Monarch was an ideal fit, as it is a system-level product that can examine each file selected for use to ensure consistent standards across the client’s vast computing landscape. Additionally, ACC Monarch gives users flexible control to examine, override, and record JCL control statements, warn of incorrect usage, and manage datasets with easy-to-understand policy rules.

One of the many ways we saved them time and human resources was to ensure that standards for VSAM RLS (Record-Level Sharing) attributes and other VSAM attributes were valid and consistent for the datasets they were associated with. This sort of logical enforcement of complex standards (as opposed to merely syntax or validity-checking) is something that is only possible with a flexible policy rules language like that provided by ACC Monarch.
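
As a hedged illustration of the kind of attributes such a rule might check (the dataset and log stream names below are hypothetical, and this is standard IDCAMS syntax rather than ACC Monarch’s policy language), a VSAM cluster defined as recoverable for RLS carries LOG and LOGSTREAMID attributes that must agree with the standards for that dataset:

    //DEFVSAM  EXEC PGM=IDCAMS
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD *
      DEFINE CLUSTER (NAME(PROD.PAYROLL.KSDS) -
             INDEXED KEYS(16 0)               -
             RECORDSIZE(200 400)              -
             CYLINDERS(50 10)                 -
             LOG(ALL)                         -
             LOGSTREAMID(PROD.PAYROLL.FWDLOG))
    /*

A policy rule can flag a dataset whose LOG setting is NONE when the naming standard says it should be recoverable, the kind of logical mismatch a plain syntax check would never catch.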

According to their Datacenter Manager, “DTS helped us standardize and automate some modernization initiatives we had taken on. Their policy rules engine was simple and pretty straightforward, enough so that our less-experienced storage admins could manage batch jobs without interrupting more seasoned programmers who had the JCL experience.”

Best-in-Class Products Backed by Best-in-Class Customer Service
We take mainframe support for your business seriously and back it up with a customer service program that is second to none. We consistently receive the highest marks for response time and quality of service, with phones answered by real people who have in-depth knowledge of our products. You won’t realize it until you need us, but that’s when we’ll shine!

The Original Storage Management Experts
To put the power of over 30 years of storage management expertise to work for your business, contact us at [email protected] to schedule a demo or start your free one-year trial of any DTS product. And be sure to join us each month as we present complimentary educational webinars on topics of interest and importance to today’s mainframer community.

DTS Webinar Recap: How to Properly Back Up and Restore a Dataset in IBM® z/OS®

The handling of datasets as they move up and down the hierarchy is a key focus for any data center manager. When a dataset becomes damaged or inaccessible, being able to recreate it from a backup is paramount.

While data center-wide backups are usually performed by the storage management department, restores are often the responsibility of the user. There are many complexities to creating and maintaining backup systems, but restoring is often more straightforward. Understanding the programs, mechanisms, and commands involved in creating backups, and especially in locating them and performing restores, is an important skill for the z/OS user.

Taking backups isn’t something to be overlooked or done only on occasion. The questions are who takes them, why they are taken, and where they fit in the storage management scheme. In a recent webinar, DTS Software CTO Steve Pryor discussed the major backup systems (DFSMShsm, FDR, CA-Disk), their functions and operation, and how to restore backed-up datasets.

Basic but Critical Functions
Backup and restore are basic functions, but they are also critical ones, especially for large data centers, when a dataset is damaged, accidentally deleted, or lost to a hardware failure. The same need applies to entire volumes or storage groups in a disaster recovery situation.

Data Availability Management vs Space Availability Management
Storage management in z/OS is usually divided into two camps: data availability management and space availability management. What is the difference and what are their goals?

The goals of data availability management are to ensure that the datasets that are supposed to be backed up actually are, that they are backed up as frequently as needed, and that they are available to be restored when needed. From a data availability standpoint, a backup copy’s lifecycle is relatively short, and several versions are often retained and vaulted.
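
In an SMS-managed environment, these goals are usually expressed as management class attributes that DFSMShsm honors during automatic backup. A purely illustrative sketch (the class name and values below are hypothetical, not a recommendation) might look like this:

    Management Class . . . . . . . . . : MCPROD
      Auto Backup  . . . . . . . . . . : YES
      Backup Frequency . . . . . . . . : 0   (back up whenever changed)
      Number of Backup Versions  . . . : 3
      Retain Days Only Backup Version  : 60
      Retain Days Extra Backup Versions: 30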

The purpose of space availability management is to ensure sufficient free space is available to allow the production workload to run. This involves managing the dataset’s movement through the hierarchy from primary disk to compressed disk, tape, or even to the cloud.

With space availability management, datasets are removed from primary disk once a copy has been written to lower-level storage (usually referred to as migration or archiving). The dataset’s lifecycle is much longer, generally only one or two copies exist, and a recycle function must be performed as the dataset ages on the archive media.
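
As a small sketch of what this looks like from the DFSMShsm side (the dataset name is hypothetical), a user or batch job can drive migration and recall explicitly with TSO commands, although automatic space management normally does the same work on a schedule. Migrate a dataset (by default to migration level 1, typically compressed disk):

    HMIGRATE 'PROD.PAYROLL.HISTORY'

Migrate it directly to migration level 2, typically tape:

    HMIGRATE 'PROD.PAYROLL.HISTORY' MIGRATIONLEVEL2

Recall it to primary disk when it is needed again:

    HRECALL 'PROD.PAYROLL.HISTORY'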

The Most Frequently Used Backup Types in z/OS
Backup types and the programs most widely used to perform backups in z/OS were the next points of discussion in the webinar. Backups are generally taken by data center management, storage administrators, or operators, or by an automatic function.

Full volume backups back up all the datasets on a particular volume. Dataset backups cover individual datasets. And one of the most common backup types is the incremental backup, which is typically done on a volume basis: only the datasets that have changed since the last backup are copied, usually once a day. Other, less frequently used backup types were also mentioned.
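
For readers who want to see what these look like in practice, here is a hedged sketch using DFSMSdss (the ADRDSSU program), one of the utilities commonly used to take these backups; the volume serial, dataset names, and backup file names are hypothetical. A full volume dump copies everything on the volume:

    //FULLDUMP EXEC PGM=ADRDSSU
    //SYSPRINT DD SYSOUT=*
    //TAPE     DD DSN=BACKUP.PROD01.FULL,DISP=(NEW,CATLG,DELETE),
    //            UNIT=TAPE
    //SYSIN    DD *
      DUMP FULL INDYNAM((PROD01)) OUTDDNAME(TAPE)
    /*

A logical dataset dump backs up only the datasets you select:

    //DSDUMP   EXEC PGM=ADRDSSU
    //SYSPRINT DD SYSOUT=*
    //TAPE     DD DSN=BACKUP.PAYROLL.DSF,DISP=(NEW,CATLG,DELETE),
    //            UNIT=TAPE
    //SYSIN    DD *
      DUMP DATASET(INCLUDE(PROD.PAYROLL.**)) -
           OUTDDNAME(TAPE)
    /*

Incremental backups, by contrast, are usually left to DFSMShsm automatic backup, which checks the data-set-changed indicator and copies only the datasets that have been updated since their last backup.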

From here, Pryor focused on HSM and FDR, two of the most widely used backup and restore utilities. He then took a deep dive, giving numerous examples and how-to explanations, and offered a “cheat sheet” of terminology comparing HSM and FDR, which is included in the presentation slide deck.
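
The full cheat sheet is in the slide deck, but as a hedged taste of the DFSMShsm side (the dataset name is hypothetical; FDR uses its own control statements, for example in its DSF and ABR components, which the deck covers), the everyday TSO user commands look like this. Take an on-demand backup of a dataset:

    HBACKDS 'PROD.PAYROLL.MASTER'

List the backup versions DFSMShsm has recorded for it in the backup control data set:

    HLIST 'PROD.PAYROLL.MASTER' BCDS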

You Have the Backup, Now How Do You Restore it?
The remainder of the webinar focused on restoring the data when you need to, whether that be from disk, tape, or the cloud. Pryor once again presented a number of how-to examples and discussed the different approaches depending on the utility you are using.
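
As a hedged example covering two of the most common cases (dataset and backup file names are hypothetical), a DFSMShsm incremental backup is restored with the HRECOVER TSO command, either over the existing dataset or from an older generation to a new name:

    HRECOVER 'PROD.PAYROLL.MASTER' REPLACE
    HRECOVER 'PROD.PAYROLL.MASTER' GENERATION(1) NEWNAME('PROD.PAYROLL.OLDCOPY')

A DFSMSdss logical dump is restored with an ADRDSSU RESTORE job pointed at the dump file:

    //RESTORE  EXEC PGM=ADRDSSU
    //SYSPRINT DD SYSOUT=*
    //TAPE     DD DSN=BACKUP.PAYROLL.DSF,DISP=OLD
    //SYSIN    DD *
      RESTORE DATASET(INCLUDE(PROD.PAYROLL.MASTER)) -
              INDDNAME(TAPE) REPLACE
    /*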

Backup and Restore Reference Resources
If you need more reference material on backing up and restoring in z/OS, Pryor provided pointers to the key reference resources during the webinar. Steve is also available via email to answer questions about this topic. He can be reached at [email protected].

Learn More in Our Webinar Available On-Demand
As with each of our monthly webinars, “How to Back Up and Restore a Dataset in z/OS” is an informative, educational 60-minute look at an important topic in the mainframe space. It includes numerous examples, how-to guides, and references on where to find more information should you need it.

If you weren’t able to attend or would like to review the material presented, you can view it on-demand and download a copy of the slide deck by using this link. Be sure to join us each month for our complimentary webinar series. Go to www.dtssoftware.com/webinars for more information.