Enterprise content management and business process automation platforms are important technology investments that help your business run faster and at a lower cost. But over time, these systems and databases need to be updated and maintained to ensure optimal performance – and prevent costly downtime. RPI Consultants has deep technical experience with ERP, ECM, and BPM products and solutions. In this webinar we share our system and database maintenance best practices.
John Marney:
Hello and thank you for joining us for another RPI Consultants Webinar Wednesdays. Today our topic is system and database maintenance best practices. These are recommendations that apply broadly to just about any application. However, we have pulled this information from our knowledge and experience across the Perceptive, Kofax, and Hyland products.
So first, let’s take a look at our upcoming webinar schedule. Today we actually have two more webinars. At one o’clock Central we have Perceptive Experience Content Apps. This afternoon, Mike and I will be back with you discussing strategies for how you can migrate your enterprise content and data into the cloud. That’s going to be a really good one, so please join us for that. Next month we have a Kofax-themed webinar series on November 6th: What’s New in Kofax TotalAgility 7.6 in the morning, and What’s New in Kofax ReadSoft Online in the afternoon. Both have had major updates recently.
If you haven’t joined us for our office hours, that’s a little bit different kind of webinar where we take a deep dive into a more technical topic. Those are in the third week of the month. So on Friday, October 16th, we’ll be covering Perceptive Content application plans, and then in November we have a deep dive into Perceptive Experience.
So many of you probably already know us, but my name is John Marney, I’m the Manager of Solution Delivery at RPI. I oversee our Content and Process Automation practice. That’s us. I’ve been a software automation architect for around 10 years now, and I don’t say it lightly, but I call myself an AP Automation guru, so please feel free to reach out about your AP automation needs.
Michael Madsen:
Hello, I’m Michael Madsen. I’m a Lead Consultant with the RPI office; I primarily work with Brainware and Perceptive Content solutions dealing with back office and higher education. I’m also the office Dungeon Master, so we’ll roll this off with an initiative check.
John Marney:
So our agenda today, we’re going to actually break down our recommendations based on different types of applications, different types of servers. So you have your application, your web, and your database servers; we’ll talk a little bit about disaster recovery planning, and then we’ll take your questions. That said, feel free to toss your questions into GoToWebinar at any time, and we will be sending a recording of this webinar out to everybody.
So first let’s talk about application servers. This is your primary application, so things like Perceptive Content or ImageNow server, your OnBase app server, Brainware, et cetera.
Michael Madsen:
So, the first thing we’ll talk about is just some best practices around installation. Obviously, some things you’re going to want to check are your hardware specifications around what the software requires, and any installation dependencies that you’re going to need to put together before installing the software.
So, a good example of that is if I have a Perceptive Content server installation package that I’m trying to run through, there are going to be some .NET installations that I need to run first. The Perceptive Content installer will try to install those, but a lot of the time server security won’t allow it to go through, so a lot of that stuff you need to check beforehand.
John Marney:
So, most of the time you can install all of those dependencies via the Features and Roles on a Windows server. We definitely recommend doing that before you ever run the application installer.
You usually want to run installers as an administrator, and you want to run them locally. There are a few exceptions, but generally speaking this is what we recommend. And when I say locally, what I’m really saying is don’t run it off of a network file share. Whenever you run an installer it has to unpack files locally anyway, and if you do that over the network, a small drop in your network connectivity can cause you to lose files and the install to fail. I’ve seen this repeatedly, so I recommend copying the installer locally to the server before you run it.
Michael Madsen:
And then you also want to check your configurations between the different business applications you may want to connect to or communicate with, and verify that your network security paths are set up correctly. Because you could test something on a local drive, but then when you switch it over to your shared network, everything breaks just because you didn’t check that security beforehand.
John Marney:
And much of what you want to check with your network paths is the account that runs the services for this application: does it have access to what it needs to have access to? That is part of the reason why we recommend you configure any services to run under a specific service account. Passwords can expire and permissions can change suddenly. It’s just an industry-wide best practice.
So next we’ll talk about some more maintenance. This is throughout the life of your application, things that you should be doing continually.
The first thing is you want to begin rotating your object storage. So, in Perceptive, this is your OSMs. In OnBase, this is your disk groups. You want to make sure that these are kept in smaller, more manageable chunks for easier archival and for retention management.
Michael Madsen:
Yeah, and then you also just want to be sure that you’re always cleaning up your log directories and your temp files. So, specifically with Perceptive Content, there are some things that run to zip up all of your log files in the morning, however, there isn’t always something that cleans up those zip files. So, you may go into the log directory thinking that you’re getting everything cleaned up every day when in reality you still have all of these zip files building up over years and years because nobody’s taken a look at the directory. So, it’s a good idea to put together some batch files that are linked in with a scheduled task or something like that that you can run on a schedule to make sure that everything’s clean and moving forward.
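The scheduled-cleanup idea Michael describes can be sketched in a few lines of Python instead of a batch file. This is a minimal example, not product tooling: the log directory path and the 30-day retention window below are hypothetical values you would replace with your own.

```python
import os
import time

# Hypothetical values -- point these at your own environment.
LOG_DIR = r"C:\inserver\logs"
MAX_AGE_DAYS = 30

def purge_old_zips(log_dir, max_age_days):
    """Delete .zip archives older than max_age_days; return the names removed."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    for name in os.listdir(log_dir):
        if not name.lower().endswith(".zip"):
            continue  # leave live logs and other files alone
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(name)
    return deleted
```

Registered with Windows Task Scheduler or cron, a script like this keeps years of forgotten zip files from quietly piling up in the log directory.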
John Marney:
So Perceptive actually handles log cleanup more gracefully than a lot of applications do, but you still want to make sure that you’re not accruing a ton of disk space usage over time. And there are things like scripts, exports, and system integrations that can store temporary image files or data on your server that you want to make sure get cleaned up as well, so you don’t accrue that space unnecessarily.
Michael Madsen:
Also keeping an eye on those directories may point you towards some kind of leak or a failure inside of a script. So even if you’re not necessarily running into size restrictions or something like that, it might be just a good indicator to other issues.
John Marney:
On top of this, you want to actually check those log files for those kinds of failures. You could have something that is built out when you first implement and is fully tested and it works great for years, but it could be doing some sort of silent failures that don’t actually impact the business but impact you technically. And so checking your log files to make sure that what is supposed to be happening is happening is very important.
You also want to perform, on top of that, health checks on your response times and your system performance. So again, you may implement and be great for years, but many system performance issues don’t happen suddenly. They happen because of an accrual of poor practices over time, and your users may not notice until one day you start receiving calls that people can’t log in, or there are errors popping up, what have you. You can help mitigate that by checking your response times and your performance throughout the life of the product.
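A basic response-time check doesn’t require a monitoring suite to get started. Here is a small sketch of the idea: time a GET against a health endpoint and classify the result. The URL and the 500 ms threshold are assumptions for illustration, not values from any particular product.

```python
import time
import urllib.request

# Hypothetical endpoint and threshold -- substitute your own.
HEALTH_URL = "http://appserver:8080/health"
SLOW_THRESHOLD_MS = 500

def check_response_time(url, timeout=5):
    """Time one GET; return (status_code, elapsed_ms), or (None, None) on failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except OSError:
        return None, None
    return status, (time.monotonic() - start) * 1000.0

def classify(status, elapsed_ms, slow_ms=SLOW_THRESHOLD_MS):
    """Turn one measurement into an alert level for a report or email."""
    if status is None:
        return "DOWN"
    if status != 200:
        return "ERROR"
    return "SLOW" if elapsed_ms > slow_ms else "OK"
```

Run on a schedule and logged, even this much gives you a trend line, which is exactly what catches the slow accrual of problems before users call.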
Michael Madsen:
Yeah, and there are a lot of third-party programs too that you can utilize to help you do that so that it’s not just all on your shoulders.
John Marney:
We included a slide on high availability specifically for your application server because this is really important for user access, for disaster recovery, and for other reasons as well. High availability refers to having, generally speaking, multiple active servers, so that if a specific node or instance goes down or becomes unavailable, users still have access to the application.
Michael Madsen:
Yeah, it’s one thing if Dev or Sandbox goes down, but if your prod environment goes down and you don’t have a backup strategy for that, then your business is essentially shut off until you fix it.
John Marney:
High availability can also be important just from a pure user volume perspective. Adding additional application servers to your cluster allows them to load balance and share the work, especially across regions. You can set up high availability to work over a WAN, so that if you have users in different parts of the country, they’re accessing the server that is closest to them.
And for storage and backups, the biggest thing is that you need to understand the different types of storage mechanisms that exist and could be attached to your application server. The high-level types are here. Most often you’re going to be using either direct storage, so a drive on the server, or a storage area network, your SAN. But you want to understand what each is capable of in terms of backups and adding new disks, and which types have limitations you’ll need to work around.
Michael Madsen:
Yeah, and speaking of disks, setting up some kind of disk space monitoring program, or some kind of alert to let you know when your disk is filling up, will save you a lot of headache in the future. Because if you don’t have anything, your disk fills up, and a full disk can cause software issues in your environment that you could have avoided if you’d just added space to begin with.
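A disk space alert of the kind Michael mentions can be as small as a scheduled script around Python’s `shutil.disk_usage`. The 85% threshold here is a hypothetical starting point; tune it to how fast your volumes actually grow.

```python
import shutil

# Hypothetical alert threshold -- tune to your environment's growth rate.
ALERT_THRESHOLD_PCT = 85

def disk_usage_pct(path):
    """Percent of the volume containing `path` that is currently in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def needs_alert(path, threshold_pct=ALERT_THRESHOLD_PCT):
    """True when the volume has crossed the alert threshold."""
    return disk_usage_pct(path) >= threshold_pct
```

Wire the boolean into whatever notification channel you already have (email, Teams, a ticketing system) and run it on the same scheduler as your other maintenance jobs.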
John Marney:
In general, it’s an IT best practice: don’t store application data on the operating system drive, because if that fills up you may not even be able to effectively use the operating system to fix it.
Michael Madsen:
Yeah. We see a lot of people who have maybe done their own upgrades or their own installations, and we go in there and they have all of their server files directly on the C drive, and that’s one of the first things that we’ll say, move this somewhere else, because if that goes down there’s nothing we can do really.
John Marney:
Bad news bears.
All right. Also, perform backups of your object storage multiple times a day. So, we’re talking about taking incremental backups that reflect all changes made to those objects. You generally want to do this in sync with your database incremental backups. Often we recommend every 15 minutes. And then don’t forget to back up the configurations as well.
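As an illustration of the incremental idea John describes, capturing only the objects that changed since the last run, here is a simplified Python sketch that copies files modified after a given timestamp. It is a model of the concept under stated assumptions, not a replacement for your storage platform’s backup tooling.

```python
import os
import shutil

def incremental_backup(src_dir, dest_dir, since_epoch):
    """Copy files under src_dir modified after since_epoch into dest_dir,
    preserving the relative directory layout. Returns the relative paths copied."""
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) <= since_epoch:
                continue  # unchanged since the last backup window
            rel = os.path.relpath(src, src_dir)
            dest = os.path.join(dest_dir, rel)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(src, dest)  # copy2 preserves timestamps
            copied.append(rel)
    return copied
```

Scheduled every 15 minutes with `since_epoch` set to the previous run’s start time, this mirrors the cadence recommended above for object storage and database log backups.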
Michael Madsen:
I mean inside of iScripts and things like that, you may have tested an iScript to make sure that everything works. A good example, which a lot of people aren’t going to see anymore since Perceptive has moved away from Inmac, but it’s a good one for this topic: Inmac used to have a bug where if it failed it would just wipe out the entire INI file, so your entire configuration file is gone. If you didn’t have that backed up, now you’re having to rebuild the entire thing from scratch. You’ve got to remember what network drives you were connected to, remember what index fields you were setting. So having a backup of those configuration files once you lock them in is generally a really good idea.
John Marney:
We’ve had clients who had great practices in backing up their database and backing up their image objects, but they didn’t back up their configurations. They’d come to us and they’d say, after a failure, “Well, isn’t everything in the database?” Unfortunately, not. Not usually.
Okay, so generally most of those recommendations carry forward into other server types. However, there are some things that are specific to web servers that we wanted to talk about. So again, we’ll start with installation.
Michael Madsen:
Yep. So, if we’re talking about web servers, we’re going to be talking about ports a lot, so verify that any of the ports that you need to connect through to get to your web application are open and that the connectivity to your client systems is good. Also, consider your architecture. If you want to include multiple web servers, it’s generally good to have some type of redundancy, especially if outside access is required.
John Marney:
Right. So, you may need multiple web servers purely for load balancing within your environment, but if you need external access, which is fairly common, you want to utilize a reverse proxy to secure and separate that external user access from internal access.
And then you really want to use SSL. It’s surprising in this age how many people don’t use it. All you need is a simple search online to see the dangers that using plain text introduces to your environment, but suffice it to say that you really, really need SSL, and it’s generally not that hard to set up. So, maintenance.
Michael Madsen:
Yep. A lot of programs, like Apache Tomcat, will have a default cache configuration set up when the program’s installed. Sometimes this works perfectly fine. Other times, you’ll notice that based on your volume it’s either too much or not enough. So just be sure to go in there and adjust it according to your business needs.
John Marney:
And there’s an entire task called right-sizing that can help set this up for you. If you have too much cache, that could mean that your data is stale and that you need to be setting up your web server to retrieve fresh information. If you have too little cache, it could cause system slowdown and performance problems for the end user.
Michael Madsen:
Yep. And when we talk about turning down logging, I mean this goes to beyond just the web server as well. So obviously, we want to be sure that we hold onto or cap our logging for something like Apache. But when we build iScripts, a lot of the time when we’re going through the development process or when people go through the development process internally, they may include a lot of extra log lines in their script just for debugging. Sometimes those are not removed. So sometimes your logs are a lot larger than they need to be so just be sure that you’re reviewing those.
John Marney:
And with your web server really carrying the main load of user activity, logging can impact the people who are going to complain the loudest.
You also could explore third-party tools for performance improvements and hardening of IIS. This is important from both a security perspective and a performance perspective. Many applications utilize IIS, the OnBase app server being one of them. On top of that, you want to check the configuration of other types of app servers such as Tomcat, also for hardening for your security’s sake. But Tomcat specifically also has its own garbage cleanup that needs to be configured, and it’s not by default.
We also have the allowed memory for the Java virtual machine that Tomcat uses server-side. If that is too low, the application can really suffer and even crash.
Michael Madsen:
Yep. And as with what we were talking about with the application installation, there may be certain roles that we need to include on the server. However, when the server is set up, it may automatically install roles that are unnecessary for your needs, and sometimes those can conflict with other things that you’re trying to run on the server. So if you don’t need it, it’s best to just remove it.
John Marney:
Exactly. It is also a security concern as well if additional features are enabled that you really don’t need.
So, from a backup and storage perspective, it’s much the same as what you might expect, but there are a couple of considerations. You want to keep web server storage to a minimum to reduce any potential performance impact. Your web servers really should not be carrying a ton of local data; that data should be either in a database or on an application server.
Also, if you’re using load balanced servers or multiple web servers, it can really benefit you to maintain a central storage location that can be a central server or just on a SAN that is accessed by multiple web servers. This allows you to update multiple servers at once with one configuration change, but also simplifies your backup strategy to not have to backup from multiple locations.
Michael Madsen:
Yep. And going back to configurations, it’s always best to keep some kind of configuration backup and keep it off the server where the current configuration is held just because if that server goes down then you’ve lost your backup as well.
John Marney:
Exactly. Okay, so we’re going to move into database servers. And of course, database servers have really two layers, you have the actual database application itself as well as the operating system that it sits on.
So, for installation, we’re going to break a couple of points down between the differences between SQL Server and Oracle. For other database types, many of these recommendations are the same, but we really don’t run into those nearly as often. For SQL Server, choose the right version. If you need high availability and you have a high user volume, you want Enterprise. Enterprise is also the only version that can be installed on a cluster, so in a majority of the larger implementations that we see, you need that, unfortunately expensive, Enterprise license.
Michael Madsen:
Yep. And then with Oracle, if it’s supported of course, be sure that you’re very clear on which Oracle version you’re using. A lot of the time, what we see is that R&D may do a lot of testing against something as common as SQL Server, but then run through certain Oracle tests quickly. And we really need to be specific with the Oracle stuff, because we can see a lot of strange functionality there if it’s not exact.
John Marney:
Yep, and you really want to make sure that not just the Oracle version itself and specific patches are carefully considered, but that the actual operating system Oracle is installed on is supported as well. So even though Oracle as a database application may be supported, the Unix version or variant that it’s installed on may not be.
And on top of that, install it on a cluster. Unlike these other types of servers that we’ve been discussing, where you have to manage your own active load balancing and have multiple instances set up, SQL Server and Oracle are natively capable of managing their load distributed across a cluster of servers, so you really want to take advantage of that if at all possible.
And one other item is that even if you need Enterprise license for production, you don’t necessarily need it for your non-production environments. So, you can simplify the setup on those a little bit.
Michael Madsen:
When we talk about maintenance, the nice thing about this section is that while some things you’re kind of forced to figure out on your own, there is so much documentation around database maintenance, like setting up maintenance plans for the specific databases used with specific software. So if you have any questions around any of this stuff, be sure to just let us know; we have so much documentation we can send your way. But specifically, maintenance plans are very important inside of your database, just to make sure that everything stays clean, we don’t run into performance issues, and re-indexing doesn’t have an issue. Use defragmentation. Again, just keep everything clean so that when we communicate with the database, we don’t run into any problems.
John Marney:
And it’s important that this maintenance is considered after major events in the system. So, if you are performing an upgrade, as soon as that upgrade is complete, we need to make sure our maintenance is run. If you have a massive document import or export, we need to make sure our maintenance is run. If you’re doing some sort of large workload activity, say for a month-end or year-end activity, we want to make sure maintenance is run. That’s because the higher transaction volume on your database introduces additional fragmentation on your indexes, which will cause performance degradation.
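One way to act on that fragmentation after a big event is to choose rebuild versus reorganize per index based on its fragmentation percentage. The thresholds below follow Microsoft’s commonly cited guidance (reorganize between roughly 5% and 30%, rebuild above 30%); the table and index names are hypothetical, and this helper only generates the T-SQL rather than executing it.

```python
# Commonly cited SQL Server guidance: reorganize between 5% and 30%
# fragmentation, rebuild above 30%. Adjust to your own workload.
REORG_PCT = 5.0
REBUILD_PCT = 30.0

def index_action(avg_fragmentation_pct):
    """Pick a maintenance action for one index from its fragmentation percentage."""
    if avg_fragmentation_pct > REBUILD_PCT:
        return "REBUILD"
    if avg_fragmentation_pct >= REORG_PCT:
        return "REORGANIZE"
    return "NONE"

def maintenance_sql(table, index, avg_fragmentation_pct):
    """Emit the ALTER INDEX statement for the chosen action, or None if no work is needed."""
    action = index_action(avg_fragmentation_pct)
    if action == "NONE":
        return None
    return f"ALTER INDEX [{index}] ON [{table}] {action};"
```

In practice you would feed this from fragmentation statistics (in SQL Server, the `sys.dm_db_index_physical_stats` view) and run the generated statements in your post-upgrade or post-import maintenance window.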
Michael Madsen:
Yeah. And going along with that point too, be sure that you’re looking at your timing of these events because if we have some large process that we’re running at the same time that we’re running our normal business process throughout the day, we can run into issues. So just keep that in mind when you’re scheduling those events.
John Marney:
And your maintenance, depending on the level of work that it needs to perform on your database, can take hours and extend into user business hours, and you don’t want these things to be running while the users are trying to access the system. So, part of your plan may need to be to implement a series of maintenance steps that attack the worst pieces or most important places first. And in that case, you’re really going to want to look for some advice on building that plan.
Additionally, we generally recommend you auto-grow your transaction log space and then truncate it after the backup to avoid overgrowth.
Now, I do want to be clear that this is not Microsoft’s recommendation. Microsoft recommends that you only truncate transaction logs if you absolutely have to. However, all of the software vendors that we work with recommend, for their applications specifically, that you do this. So there may be cases where this makes sense and cases where this doesn’t make sense. For Oracle, you want to review the AWR and ADDM reports and just like with SQL server you want to run your statistics.
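For SQL Server specifically, under the full recovery model it is the log backup itself that allows the server to truncate, that is, reuse, the inactive portion of the transaction log. A small helper for a scheduled job that generates a timestamped BACKUP LOG statement might look like this; the database name and backup path are placeholders, and as above the statement is generated, not executed.

```python
from datetime import datetime

def log_backup_sql(database, backup_dir, now=None):
    """Build a T-SQL BACKUP LOG statement with a timestamped file name.
    In the FULL recovery model, a successful log backup is what permits
    SQL Server to truncate the inactive portion of the log."""
    now = now or datetime.now()
    stamp = now.strftime("%Y%m%d_%H%M%S")
    path = f"{backup_dir}\\{database}_log_{stamp}.trn"
    return f"BACKUP LOG [{database}] TO DISK = N'{path}';"
```

Run every 15 minutes, in step with the object storage incrementals discussed earlier, this keeps both the log file size and your potential data loss window small.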
On top of this, it’s very important that your database servers are isolated and you are not running other applications with them. If you’re not familiar, if you log into a database OS, you should see all but a very small portion of the memory on that server allocated to the database. And that’s because you want to load as much data as possible into the active memory. So you don’t want anything else competing for resources on these machines.
Okay, backup and storage.
Michael Madsen:
Yep. So as a general rule of thumb, running a daily full backup or frequent incremental backups is always a good strategy. Just having any backup in case there’s some catastrophic failure is good to fall back on.
John Marney:
Yeah, we see log incrementals backed up every 15 minutes, just like I discussed on the application server side.
RAID 5 or better is recommended for the data, index, and log files, so really everything that actually drives the database should be kept as secure as possible.
Just like the application server, keep the SQL drives away from the OS drives. And this is especially important here as these things really need room to grow. If they don’t, everything dies.
Finally, just a general recommendation. You may see the ability to actually shrink a database, which actually shrinks the disk size that it takes up on the server. Don’t use it, it will ruin the database performance.
Okay, disaster recovery. We just have a couple slides real quick. So, disaster recovery is how do we actually overcome and return our application back to the users in the event of some catastrophic failure. That could be manmade, it could be natural, it could be anywhere in between. So, as we discussed earlier, you want to use the high availability server configuration wherever possible and across all server types.
Michael Madsen:
Maintaining at least one application server and database server in a data center separate from the one they’re currently stored in means that if the internet connection goes down, and you may not expect it to come back for days if something really terrible has happened, you have something else that you can connect to so that your workers can at least continue working.
John Marney:
And it is very important that you perform your disaster recovery tests from backup at least once a year, if not more often. If you’ve never done it, then it doesn’t work. We’ve had plenty of clients who thought they had a good backup strategy and disaster recovery plan and felt very secure with it, but when they actually went to implement it when needed, it failed.
Okay, so that is the meat and potatoes of our webinar on the best practices for your server configuration. We’ll take any questions. So, we discussed today the best practices for a number of different server types, application, web and database. And I’m going to go ahead and leave this slide up here for additional webinars upcoming. So, if any of this is interesting, please, please join us. Again, we’ll be back this afternoon to discuss a very related topic on moving a lot of this information into a cloud strategy instead of on premise. So that’s very related to these topics. And then there are some additional resources on our website if you’d like to learn more.
Again, application specific configurations do exist and some of them may counter our best practices listed here, so please reach out if you have any questions specifically.
Okay, it doesn’t look like there’s any questions. So we’re going to leave you with this. If you don’t know who RPI is, we’re 100 plus, probably I think more like 130 plus full-time consultants, project managers, architects, developers, what have yous. Our headquarters are in Baltimore and we have additional offices in Tampa and here in Kansas City. And we offer a broad array of technical and professional services, technical design and architecture, like what we’re talking about in this webinar, new installations, upgrades, health checks, new solutions. So, we’re partners with Infor, and Kofax, and Hyland, however, we work on a lot of different products. So, thank you for joining us today and hope to see you this afternoon.
Michael Madsen:
Thank you.