Wednesday, July 20, 2016

Restore Point Modification With DVS 2.3.1

One of the cool features in DeltaV Virtual Studio (DVS) is the ability to provide disaster recovery capabilities using Virtual Machine (VM) Replication.  VM Replication creates and constantly updates a replica image of a running virtual machine in a separate cluster or replication server.



By locating the two clusters in different locations, a disaster in one location can be mitigated by starting up the replica image in the second location.

In addition to disaster recovery, replication can also address virtual machine corruption.  By default, VM Replication within DVS provides two restore points, allowing recovery from an image that is up to two hours old.  Here’s the procedure to increase the number of recovery points and to select the required recovery point if corruption occurs.

A couple of important points before I go on – increasing the number of recovery points increases the disk space requirements for the replica image.  Also, adjusting the number of recovery points has to be done in Hyper-V, not DVS.  In next year’s release of DVS, version 3.3, you’ll be able to modify the number of restore points directly from within DVS.
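
If you’d rather script this than click through the dialogs, the stock Hyper-V PowerShell cmdlets (Get-VMReplication and Set-VMReplication with its -RecoveryHistory parameter) can do the same job.  Here’s a minimal sketch in Python that simply shells out to PowerShell on the primary Hyper-V host; the VM name DVAPP01 and the count of 8 recovery points are purely illustrative placeholders, and none of this is DVS-specific:

```python
import subprocess

# Hypothetical VM name and recovery point count, for illustration only.
VM_NAME = "DVAPP01"
RECOVERY_POINTS = 8

def run_ps(script: str) -> str:
    """Run a PowerShell command on the Hyper-V host and return its output."""
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Report the current replication settings, including how many recovery
# points (RecoveryHistory) are currently kept for this VM.
print(run_ps(
    f"Get-VMReplication -VMName '{VM_NAME}' | "
    "Format-List VMName, State, Health, RecoveryHistory"
))

# Scripted equivalent of the GUI change walked through below.
run_ps(f"Set-VMReplication -VMName '{VM_NAME}' -RecoveryHistory {RECOVERY_POINTS}")
```

That said, the walk-through below is how most people will do it.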

From the DeltaV Virtual Studio domain controller desktop, access the Hyper-V manager:


While the recovery points are associated with the replica image, managing them is done from the primary virtual machine.  You can see how many recovery points (snapshots) are available by selecting the replication location and then the replica virtual machine.  Be sure to select the Replication tab at the bottom of the dialog box:


To change the number of recovery points, select the host and then the primary virtual machine that requires recovery point modification, right-click on the virtual machine, and select Settings…


On the Settings dialog box, scroll down the left-hand pane, click to expand the Replication section, and then select Recovery Points:


The Recovery Points screen shows the current number of recovery points and the estimated storage requirement.  Change the number of recovery points, either by using the up and down arrows or by simply typing in the field.  The estimated hard drive space will change accordingly:


Once you click OK, Hyper-V will begin keeping more restore points, adding one each hour, until the newly entered number is reached.  Checking the replica again will show all the configured snapshots:


If corruption is ever detected in a primary virtual machine, use the Hyper-V manager to start the replica image and pick a recovery point.  DO NOT USE DeltaV Virtual Studio to start the replica – it will automatically use the most recent snapshot and will discard all the other snapshots.
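
The same recovery can also be scripted against the replica host if you prefer.  The sketch below is illustrative only; the VM name and the recovery point display name are hypothetical placeholders, and the cmdlets used (Get-VMSnapshot, Start-VMFailover, Complete-VMFailover, Start-VM) are stock Hyper-V, not DVS:

```python
import subprocess

# Hypothetical names, for illustration only: the replica VM as it appears
# on the replica (recovery) host, and the display name of the recovery
# point chosen from the list of snapshots.
VM_NAME = "DVAPP01"
RECOVERY_POINT = "Standard Replica - (7/20/2016 - 10:00:00 AM)"

def run_ps(script: str) -> None:
    """Run a PowerShell command on the replica Hyper-V host."""
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", script],
        check=True,
    )

# Pick the desired recovery point, fail over to it, commit the failover,
# then start the recovered virtual machine.
run_ps(
    f"$rp = Get-VMSnapshot -VMName '{VM_NAME}' | "
    f"Where-Object {{ $_.Name -eq '{RECOVERY_POINT}' }}; "
    "Start-VMFailover -VMRecoverySnapshot $rp -Confirm:$false; "
    f"Complete-VMFailover -VMName '{VM_NAME}' -Confirm:$false; "
    f"Start-VM -VMName '{VM_NAME}'"
)
```

Either way, the point stands: pick the recovery point deliberately rather than letting the most recent one be used by default.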

Thursday, July 7, 2016

Using Fault Detection to Help Increase Process Understanding

We released our Batch Analytics (BA) application with version 12 of DeltaV and provided some additional enhancements as part of version 13.  BA uses Multivariate Analysis and Dynamic Time Warping to detect process faults, identify the reasons for those faults, and predict endpoint quality, all in real time.  So instead of having to wait until the batch completes to find out there was a problem, fault and quality issues can be examined while the batch is still running.  This allows operations and engineering personnel to make better decisions that could correct a quality issue, dump a bad batch early, or schedule maintenance for when a unit is not in use.
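
I won’t go into the product’s math here, but to give a feel for the time-alignment piece, here’s a toy Python sketch of classic dynamic time warping.  It’s purely illustrative and is not the BA implementation:

```python
import numpy as np

def dtw_distance(reference, batch):
    """Classic dynamic time warping distance between two 1-D trajectories.

    Illustrative sketch only; it shows the alignment idea, not the
    Batch Analytics algorithm or its tuning.
    """
    n, m = len(reference), len(batch)
    # cost[i, j] = best cumulative cost of aligning reference[:i] with batch[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(reference[i - 1] - batch[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # advance reference only
                                 cost[i, j - 1],      # advance batch only
                                 cost[i - 1, j - 1])  # advance both together
    return cost[n, m]

# Toy example: a running batch that lags a reference profile slightly.
reference = np.sin(np.linspace(0, np.pi, 50))
batch = np.sin(np.linspace(0, np.pi, 60) - 0.1)
print(f"DTW distance: {dtw_distance(reference, batch):.3f}")
```

This kind of alignment is what lets profiles from batches of different durations be compared point for point.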


Another important benefit is helping inexperienced personnel gain process understanding.  One of the features of the fault detection screen within BA is prioritizing the parameters that are contributing to a fault:


The small green band at the bottom of the screen is the normalized range of the two fault statistics, T2 and Q.  A fault in this range (0 to 1) is statistically insignificant.  The larger the fault peak (the large blue T2 peak is around 55), the more statistically significant the fault is.  By selecting the user-friendly parameter names on the left, response plots of actual versus modeled behavior are displayed:


The black lines above are the actual parameter responses, while the dashed and dotted blue lines are the expected, modeled responses.  For instance, you can see that M1 Level didn’t increase as much as the model expected, and the Salt Bin Level didn’t drop as much as the model predicted.
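
For readers who want a feel for where T2 and Q come from, here’s a toy Python sketch of the standard multivariate statistics (Hotelling’s T2 and Q, the squared prediction error) computed against a simple PCA model.  Again, this is illustrative only; it is not how BA builds, phases, or normalizes its models:

```python
import numpy as np

def t2_and_q(x, mean, components, explained_variance):
    """Hotelling's T2 and Q (squared prediction error) for one sample."""
    centered = x - mean
    scores = centered @ components.T               # project onto the PCA model
    t2 = np.sum(scores ** 2 / explained_variance)  # variation within the model
    residual = centered - scores @ components      # what the model can't explain
    q = np.sum(residual ** 2)
    return t2, q

# Toy example: build a 2-component PCA model from "normal" data, then score
# a new sample in which one variable has been disturbed.
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 5))
mean = normal.mean(axis=0)
u, s, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                                # loadings (2 x 5)
explained_variance = (s[:2] ** 2) / (len(normal) - 1)

sample = rng.normal(size=5) + np.array([0.0, 3.0, 0.0, 0.0, 0.0])
print(t2_and_q(sample, mean, components, explained_variance))
```

Roughly speaking, a large T2 means the batch is moving along directions the model knows about, but farther than normal, while a large Q means the batch is doing something the model has never seen.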

But you don’t have to have a fault to monitor actual versus modeled trajectories.  Here’s a fault detection screen from a normal batch:


You’ll notice that while some of the peaks exceed the normalized 0 to 1 range, the largest fault is less than 2.5, compared to the 55 in the previous example.  My point is that you can still see the parameters on the left-hand side and can plot their actual versus modeled responses:


Notice that the mixer and salt bin levels followed the predicted, modeled response.  This can be a great tool for training operations personnel to understand what “normal” looks like, so they are better prepared for process upsets and faults.