Author: Chris Sutherland, firstname.lastname@example.org
It is almost time for the annual PEC Educational Conference (March 6-9, 2012; shameless plug: don't miss it!), and while preparing for it I came across a few interesting facts about just how quickly virtualization is catching on. So I thought we might revisit some of those findings and talk about why virtualization is a good choice for most enterprises.
Virtualization continues to be a hot topic in most IT circles, and survey data show why the trend keeps growing. In the finance and insurance industries, 44% of organizations already use some form of virtualization, and 82% plan to use more virtualization for newly acquired servers. The size of the businesses involved may surprise you as well: among those surveyed, small companies lead the way at 45%, followed closely by medium-sized companies at 37%, enterprise-sized companies at 35%, and commercial-sized companies at 35%.
So why are these companies choosing virtualization? Leading the list of reasons is the ability to reduce hardware costs, followed closely by disaster recovery (DR) improvements. There are so many reasons we could fill several blog entries with them, but let's focus on one: the ways virtualization can help you improve your DR effort.
So how does virtualization (and VMware) help with your DR plans? First, you must define your plan (you do have one, right?). What are the most critical applications that need to be back online? How long can you go without email? How long can you go without access to documents? These are all questions that need to be answered when assessing your needs. Ask yourself, "What would you do if your main location were suddenly gone?"
There are several tools available that give you different levels of support for your DR site. From backup replication to SAN replication, you can meet your desired Recovery Time Objective (RTO) for getting your servers and applications back in the hands of your employees. Many factors should be considered when planning a DR site, from the amount of data you need to replicate to the bandwidth required to keep everything up to date. There are also tools that will assist with failover procedures for all of your systems, from changing the systems' IP addresses to controlling the order in which servers boot and come back online. With the proper configuration and tools, you can accomplish the most important function of a DR site: testing it, without affecting your production environment, so that you know it will work, and work correctly, when you need it. That means that when a natural disaster occurs, even though we all hope it never does, you have a site you can trust.
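To make the data-versus-bandwidth question concrete, here is a minimal back-of-the-envelope sketch in Python. It checks whether a replication link can keep a DR copy within a chosen staleness window (a recovery point objective, or RPO); the change rate, link speed, and RPO below are hypothetical placeholders, not recommendations.

```python
# Rough check: can our WAN link replicate daily data changes fast enough
# to keep the DR copy within our recovery point objective (RPO)?
# All figures are hypothetical placeholders.

daily_change_gb = 120   # data changed per day that must be replicated
link_mbps = 100         # usable WAN bandwidth dedicated to replication
rpo_hours = 4           # how stale the DR copy is allowed to become

# Data generated during one RPO window.
change_per_rpo_gb = daily_change_gb * (rpo_hours / 24)

# Time the link needs to move that much data (GB -> gigabits -> hours).
transfer_hours = (change_per_rpo_gb * 8) / (link_mbps / 1000) / 3600

if transfer_hours <= rpo_hours:
    print(f"OK: {change_per_rpo_gb:.0f} GB replicates in {transfer_hours:.1f} h")
else:
    print(f"Shortfall: moving {change_per_rpo_gb:.0f} GB takes {transfer_hours:.1f} h, "
          f"but the window is only {rpo_hours} h")
```

Numbers like these are only a starting point; change rates spike, links are shared, and your replication tool's own behavior (compression, deduplication, snapshot frequency) will move the answer in practice.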
So what have we concluded? While no one wants to think about a natural disaster or what they will do if it happens to them, with a defined plan, replication of your data, and the proper tools, you can trust that you are well prepared. So, what is your plan for when disaster strikes?
Author: Dave Foss, email@example.com
We don’t usually use this blog as a forum to talk about ourselves, but please allow me to make this special exception to share with you some significant changes at ProfitStars and the reasons behind them.
In 2004, we began what is now ProfitStars with the vision to provide solutions that can improve the performance of financial institutions and other diverse corporate entities. With aggressive goals for growth and for our clients' success, we have since combined 18 of the industry's leading niche solution providers to power the performance of more than 11,000 clients. We have come a long way because of grand visions, a collaboration of talent and technology, and loyal, driven clients.
After several years of developing strategic partnerships and delivering solid growth, ProfitStars has matured its vision to improve the financial industry and those closely associated with it. As part of our family, we want you – our clients, partners and associates – to understand our vision as it continues to evolve. For those of you reading with whom we don't currently have a relationship, we want to familiarize you with this vision as well. That is why we are introducing a renewed brand, "Today's ProfitStars," to better reflect the innovative and growing organization we have become.
Starting today you’ll notice a new look and feel to our website and overall brand image. Understand that we are the same ProfitStars with which the industry has grown, but it was time for us to update our message to the markets we serve.
Our goal with this brand renewal is to make sure you (and your customers) are more aware of the variety of tools available from ProfitStars to help improve your business. With that in mind, we have reorganized how we present our more than 60 solutions, categorizing services to more clearly communicate what is available and where it fits within your organization. These groupings are Financial Performance; Imaging & Payments Processing; Information Security & Risk Management; and Retail Delivery – determined by our team as the four primary areas most integral to your operations.
We want you to remain market leaders, so expect continued product expansion and maturity from us. The biggest banking trends this year will be fueled, and supported, by technology innovations, and we're always working to stay ahead of those trends. This organization is nimble and on track with our goals of enabling a more profitable, efficient experience, and that value is being extended to you.
This new beginning can only be successful with your support. We’re here to listen and learn even more about what you want and need, so please continue to provide that feedback. Your unique contributions are what have helped make ProfitStars what it is today, and we look forward to getting you reacquainted with our brand and the solutions available to impact your success.
So, take a look around the new website at www.profitstars.com, and tell us what you think.
Thank you for your participation in developing, launching and always evolving our solutions. We look forward to growing with you and continuing to provide solutions that improve your organization’s overall performance.
Author: Joe Rezac, ALM Services Manager, firstname.lastname@example.org
With football season coming to a close and the seasons changing, my attention has turned to my winter hobby – woodworking. Stepping into the shop after a long hiatus, I'm always amazed by the number of woodworking tools I've accumulated over the years. Even so, many different tools can often do the same job. A hand tool may offer more precision than a power tool, but it could take twice as long to use.
Determining reasonable Asset-Liability Management (ALM) model estimates is a lot like woodworking. There should be many tools in your toolbox when you work on your model assumptions, helping you to achieve your goal of coming up with reasonable model results for interest rate risk.
One ALM model assumption that has been in the spotlight recently is the deposit account decay estimate. Until recently, the use of industry-derived decay estimates – like those from the Office of Thrift Supervision (OTS) or National Economic Research Associates (NERA) – was standard practice in many ALM models and drew little criticism. Simply using antiquated industry standards is no longer going to cut it. There is no documented rule of thumb for what a typical deposit account's life should be, and there is little regulatory guidance on how to come up with proper decay estimates, other than that the method used should reflect the size and composition of your balance sheet. Since non-maturity deposit (NMD) balances play a significant role on the liability side, coming up with a reasonable assumption for NMD life is critical in fair value analysis.
Various approaches can be used to estimate depositor retention from historical analysis. One popular approach involves looking at a group of accounts that were opened and closed within a particular timeframe and seeing how those account balances changed relative to changes in interest rates. But be careful. If you take this approach, don't stop there. Often this type of analysis produces long retention times, and the longer the retention time used for a deposit account, the larger the implied premium in fair value calculations, which can skew overall fair value results. Today, long deposit lives are facing strong examiner scrutiny. In many cases, the financial institution ends up using its rosier study results merely as a negotiating chip and reverts to the old OTS/NERA decay estimates. When you scrap the results of a third-party study that you paid good money for, that's an expensive lesson to learn! To put it in woodworking terms, that's like being told you can't use the state-of-the-art table saw you just bought and should go back to a hand saw because it's less likely to cut off your fingers!
With increased examiner scrutiny a reality no matter which method is used, there are easier-to-understand, less time-consuming, and potentially less expensive options available. Which options make sense depends in large part on how much historical depositor data is accessible to you. Items such as account open dates, close dates, average balances, and account types can be tracked over time to start building a picture of depositor behavior at your institution. Ideally, at least one full business cycle's worth of data is needed. If you are limited on historical data, start with whatever is available and track it going forward.
Again, don't stop there. Your deposit base is dynamic, so the process for tracking its ebbs and flows also needs to be dynamic. Has your institution experienced "surge deposits" since the 2008 financial crisis? If so, what percent of each account does this group represent? Look at the data back over 10 years: how many accounts are still open after years one, two, three, and so on? How many are still open today? Now look at the data over a five-year horizon: what does that tell you? What future events (either internal or macro-economic) will likely change depositor retention from what it was historically? Repeat your process periodically to make sure the information you get from the results stays current. Discuss with your Asset & Liability Committee (ALCO) and examiner to see how the process can be fine-tuned to produce more meaningful estimates.
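As a rough illustration of the kind of tracking described above, here is a minimal Python sketch that turns account open and close dates into simple survival figures by account age. The field names and sample records are hypothetical; a real analysis would pull the full history from your core system, weight by balances, and segment by account type.

```python
from datetime import date

# Hypothetical extract of non-maturity deposit accounts:
# (account_id, account_type, open_date, close_date or None if still open)
accounts = [
    ("A1", "MMDA", date(2005, 3, 1),  date(2009, 6, 30)),
    ("A2", "NOW",  date(2006, 7, 15), None),
    ("A3", "MMDA", date(2007, 1, 10), date(2012, 2, 1)),
    # ... load the full history from your core system here
]

as_of = date(2012, 1, 1)

def still_open_after(years):
    """Share of accounts old enough to measure that survived at least `years`."""
    eligible = [a for a in accounts if (as_of - a[2]).days >= years * 365]
    if not eligible:
        return None
    survived = [
        a for a in eligible
        if a[3] is None or (a[3] - a[2]).days >= years * 365
    ]
    return len(survived) / len(eligible)

for yr in (1, 2, 3, 5, 10):
    rate = still_open_after(yr)
    if rate is not None:
        print(f"Still open after {yr:>2} year(s): {rate:.0%}")
```

Run periodically, figures like these give your ALCO something concrete to discuss when setting and defending NMD life assumptions.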
Once a base set of assumptions is developed, consider estimating how the base assumptions change when market rates change. Traditional statistical analysis has focused on how retention is impacted when market rates go up or down. However, this disregards all of the other macro-economic factors that go into a depositor’s decision to keep money parked at their financial institution or close out the account and do something else with the money.
According to a 2011 J.D. Power & Associates study on U.S. Retail Bank New Accounts, the number one reason people switch banks is life circumstances. Life events such as moving, job loss, birth, death, marriage, and divorce all play a more critical role in a depositor’s behavior than what current market rates are doing. But can this be properly accounted for in a rate-driven model? An argument could possibly be made that the rate of heart attacks increases when market rates drop 300 basis points overnight, but unless you have an actuary on staff, it may be difficult to determine the precise correlation between the two. There is no “easy button” here. How much of a factor rate plays in your depositor retention for each account type is something that your ALCO needs to discuss, document, determine, and refine on a regular basis.
Depending on the size and complexity of your balance sheet, a statistically based rate analysis may still be part of your procedure for coming up with viable depositor retention estimates, but other approaches should be looked at as well. At the end of the day, are you comfortable explaining your methodology to an examiner or a volunteer board member? Whether it's used for regulatory stress testing or strategic planning, the tool you are most familiar with (and that is best designed for the task) is the one most likely to achieve the best results for you.
Author: Karen Crumbley, email@example.com
The FFIEC's Supplement to Authentication in an Internet Banking Environment has been out for over six months now, and it's fair to say that the new Guidance has seen its share of analysis from the industry at large. At first I hesitated to broach a topic that has already been the subject of so much focus throughout the latter half of 2011; however, I think there is a "sleeper" directive buried in the content that is being overlooked, inconspicuously hanging out in the Customer Awareness and Education section of the Guidance as follows:
• A suggestion that commercial online banking customers perform a related risk assessment and controls evaluation periodically
So, what does that statement mean exactly? While other items in the education section are prescriptive in nature, clearly requiring that a certain course of action be taken, this statement is somewhat vague. I am skeptical about the word "suggestion" and suspect that this directive will not be nearly as optional as it sounds. Instead, I believe that examiners may be looking for some action on this "suggestion," or at least prompting from the FI, to address this aspect of the Guidance.
Financial institutions (FIs) seem hesitant to recommend a risk assessment of this nature. Among other reasons, they do not want to task a customer with this exercise. FIs compete for commercial customers' business and worry that customers could construe the exercise as potentially burdensome.
FIs are accustomed to examiners/auditors’ expectations that they must perform several types of risk assessments, but now the tables are turned, and the FI finds itself suddenly thrust into a new role of being the enforcer. The FI will need to set expectations and provide the commercial customers with some type of framework so that they can conduct a risk assessment themselves. Additionally, the FI will need to guide the customer in determining the methodology, the frequency of this activity, and the way in which the information will be disseminated.
A few compelling reasons why FIs could benefit in this new role:
- FIs can use this task as an opportunity to emphasize the shared responsibility (FI and customer together) for ensuring the security and confidentiality of Non Public Information (NPI) and FI transactions with business customers.
- The FI will gain a risk perspective of each business as a unique entity and “risk rank” each business based on the combination of banking products/services and environment.
- The business entity may gain a comprehensive understanding of the preventative, detective, and response measures involved with each banking product/service, along with a framework for gauging risk appetite and tolerance for future banking products/services.
If the overarching goal of the Guidance is to ensure that the customer's non-public information is protected, then why wouldn't an FI implement this education directive and require its commercial customers to participate?
Author: Kevin Moland, firstname.lastname@example.org
Thanks to the FFIEC, the words "layered" and "security" have been permanently welded together. The phrase appears sixteen times (seventeen, if you allow the variation, "a layered approach to security") in last June's Supplement to Authentication in an Internet Banking Environment. Since then, the happy adjective and noun have been spotted side by side in gazillions of blog posts, white papers, and online security ads; they are part of the same family, like Donny and Marie; paired for all time, like Snooki and "The Situation."
On page four of the aforementioned guidance, the FFIEC defines layered security as being “characterized by the use of different controls at different points in a transaction process so that a weakness in one control is generally compensated for by the strength of a different control.” In many of the side streets that feed into the online financial services marketplace, this sentence is being interpreted simply—but incorrectly—as, “Financial institutions need more security.” Those who condense the guidance this way do so at their own peril.
To be fair, the guidance does require "the use of different controls," which will result in FIs deploying more security measures, but the FFIEC specifically requires that those controls be placed "at different points in the transaction process." Replacing current fraud prevention tools with new ones (e.g., removing tokens and replacing them with out-of-band phone authentication) may or may not improve a particular checkpoint, but it won't add new security layers and it won't meet the goals set forth by the FFIEC. Adding more of the same kind of security (e.g., adding out-of-band authentication in addition to tokens) won't add a new layer either; it will just make the existing layer fatter. Adding more cheese to your cheeseburger doesn't make it a different kind of sandwich; it just makes it cheesier.
In addition to deploying fraud prevention tools at different points in the transaction process, the FFIEC further directs that these controls be implemented in a way that ensures “a weakness in one control is generally compensated for by the strength of a different control.” In other words, what the FFIEC really wants is intelligently layered security, where each layer is designed to prevent attacks engineered to defeat other layers.
So how can an FI add new layers intelligently? In the guidance, the FFIEC discusses a plethora of security measures, but it talks very little about the “transaction process” or how to arrange security measures within it. To meet the requirements of the guidance, financial institutions will need to construct an enterprise-wide diagram detailing the flow of their electronic transactions. This flow chart should serve as the foundation for their risk assessment.
The diagram can be built around these online system activities:
• User Login
• Transaction Submission
• FI Review and Processing
• System Administration
Financial institutions should first identify the security measures they deploy today and determine how they are spread across the activities above. They must then evaluate how known threats will fare against those measures. In a perfect world, any attack that defeats a measure in one part of the process will be thwarted by measures in other parts. In the real world, FIs will likely find scenarios where existing defenses are inadequate to prevent certain types of fraud.
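One way to perform that evaluation is to build a simple coverage matrix: record which phase each control operates in and which threats it addresses, then flag any threat that is covered in only one place or not at all. The Python sketch below is a hypothetical illustration of that bookkeeping (the control and threat names are made up for the example), not a prescribed FFIEC method.

```python
# Hypothetical controls, grouped by the phase of the transaction process
# where they operate, with the threats each one helps mitigate.
controls = {
    "user_login": {
        "multifactor_authentication": {"stolen_password"},
        "device_identification":      {"stolen_password"},
    },
    "transaction_submission": {
        "dual_control":         {"man_in_the_browser", "insider_fraud"},
        "out_of_band_approval": {"man_in_the_browser", "man_in_the_middle"},
    },
    "fi_review_processing": {
        "anomaly_detection": {"man_in_the_browser", "account_takeover"},
    },
    "system_administration": {
        "admin_activity_logging": {"insider_fraud"},
    },
}

threats = {"stolen_password", "man_in_the_browser", "man_in_the_middle",
           "insider_fraud", "account_takeover"}

# For each threat, find every phase where at least one control addresses it.
for threat in sorted(threats):
    covered = [
        phase for phase, phase_controls in controls.items()
        if any(threat in mitigated for mitigated in phase_controls.values())
    ]
    if not covered:
        print(f"GAP:  nothing addresses '{threat}'")
    elif len(covered) == 1:
        print(f"THIN: '{threat}' is only addressed during {covered[0]}")
    else:
        print(f"OK:   '{threat}' is addressed in {len(covered)} phases")
```

A "THIN" result is exactly the situation the guidance warns about: a weakness in that one control has nothing behind it to compensate.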
Take, for example, fraudsters’ increasing ability to manipulate legitimate online sessions. In this type of attack, malicious entities observe system traffic unnoticed until after a user has logged in to the system. Once the user establishes a valid session, the fraudster, via embedded browser “add-ins” (Man-in-the-Browser) or by setting himself up as a proxy service (Man-in-the-Middle), assumes control of the session and submits fraudulent transactions. This type of attack takes place after user login, circumventing the strong authentication tools most FIs added in response to the FFIEC’s original 2005 guidance. Adding more user authentication measures during login won’t prevent this kind of fraud. What will help is establishing new controls in the transaction submission phase, such as dual control, velocity limits, or additional out-of-band approval for transactions sent to accounts not previously targeted by that business. Anomaly detection tools deployed in the reviewing and processing phase will further protect against these types of attacks, as will customer-installed, FI-endorsed security modules designed to police the user’s PC.
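For illustration only, here is a small Python sketch of what a transaction-submission-phase control might look like: a daily velocity limit combined with an out-of-band approval trigger for payments to accounts the business has not paid before. The threshold, function name, and account numbers are hypothetical, not features of any particular product.

```python
# Hypothetical transaction-submission checks: a per-business daily velocity
# limit plus out-of-band approval for payees this business has never used.
DAILY_LIMIT = 50_000  # illustrative daily dollar limit

def screen_transaction(amount, payee_account, sent_today_total, known_payees):
    """Return the action to take for a newly submitted transaction."""
    if sent_today_total + amount > DAILY_LIMIT:
        return "hold_for_fi_review"            # velocity limit exceeded
    if payee_account not in known_payees:
        return "require_out_of_band_approval"  # new payee: confirm out of band
    return "release"

# Example: a business that normally pays two vendors wires money to a new account.
print(screen_transaction(12_000, "999-888777", sent_today_total=5_000,
                         known_payees={"111-222333", "444-555666"}))
# -> require_out_of_band_approval
```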
Using this type of approach, financial institutions must examine how each threat fares against their security measures during each phase of the transaction process. FIs that do this will be able to identify “holes” in their current prevention plans. Once an FI understands where its security measures fall short, it can take action to strengthen weak areas.
In summary, “layered security” isn’t just about adding more stuff. It’s about adding the right stuff in the right places. FIs that intelligently arrange their layered security measures will have nothing to fear from examiners and, more importantly, their customers will have less to fear from fraudsters.