Content Manager Memory Leaks – Guest Post


Some time ago Paul and I were working together as Cognos infrastructure specialists for a certain company. One day the Cognos development environment became unstable, and we struggled to get it up and running again.

Afterwards I wrote an email to the company’s developers explaining what had happened. At Paul’s request I am publishing the main points from that email here; I think they can be educational about a particular sort of Cognos instability, often seen in development environments.

Mostly, I’m talking about crashes that are the result of Cognos’ Content Manager (CM) reaching maximum memory and then shutting down. Without the CM there can be no authentication; without authentication there’s no navigation. Also, when a server reaches maximum available memory, it becomes slow and unresponsive.

There are several reasons why a CM server would max out on memory. One of them is a bug in Cognos 10.1.1, to be fixed in the nearest fix pack (this is really a Java bug, and the JRE was updated in FP1). This bug causes a “java.lang.OutOfMemoryError” on login. It’s a slow, steady memory leak that breaks out every now and again.

Another well documented reason for a CM server to crash and burn for lack of memory is when requests pile up in memory faster than the memory is cleaned. Imagine the following scenario: a report executes the same query 50 times. The query has an error, so the error is written to the log and also to the audit database. The CM is in charge of auditing, so it gets 50 audit update requests from this report in less than a minute. The CM keeps all these requests in memory, where they remain not until they are performed, but until the memory is cleaned. This is because the requests are stored in the Cognos Java process, which is only cleaned once every n minutes.

Now, the developer running the report gets an error or false data, makes a slight change and runs it again. And so on.

If the CM’s memory gets filled before a cleanup is made, the CM will max out and will pretty much play dead.
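The pile-up can be pictured with a toy model (all numbers below are invented for illustration, not measured Cognos behaviour): requests accumulate each minute, and the heap holding them is only emptied at fixed cleanup intervals. If the burst rate is high enough, the queue overflows before the next cleanup ever arrives.

```python
# Toy model of the pile-up described above: audit requests arrive every
# minute, but the Java process holding them is only cleaned every
# `cleanup_interval` minutes. All numbers are made up for illustration.

def simulate(minutes, burst_per_min, cleanup_interval, capacity):
    """Return the minute at which the queue exceeds capacity, or None."""
    queued = 0
    for minute in range(1, minutes + 1):
        queued += burst_per_min          # e.g. a failing report re-run, 50 errors each time
        if minute % cleanup_interval == 0:
            queued = 0                   # periodic cleanup empties the heap
        if queued > capacity:
            return minute                # CM maxes out before the next cleanup
    return None

# A burst rate the cleanup can absorb never overflows...
print(simulate(minutes=60, burst_per_min=50, cleanup_interval=5, capacity=500))   # → None
# ...but triple the burst rate overflows before the first cleanup.
print(simulate(minutes=60, burst_per_min=150, cleanup_interval=5, capacity=500))  # → 4
```

The point of the sketch is only that overflow depends on the ratio between arrival rate and cleanup frequency, which is why one noisy report can take the whole CM down.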

It is very difficult to say, just by analyzing Cognos’ logs, which report or group of reports is responsible for this. If you work with a few developers, it’s not tough to cooperate with them to find the culprits. But you may be working with dozens of developers, some not even on site. However, not all hope is lost. You can scan the logs for reports that were run over and over and kept writing errors to the logs. To verify, try hiding the suspected reports via security and see what happens. If this works, you’ll have to have a talking-to with the report’s developer. Luckily, since you hid their reports, they are likely to contact you, so you don’t have to search for them.
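As a sketch of that log scan – the line format below is a made-up, pipe-delimited extract, so the parsing will need adapting to whatever your actual cogserver.log or audit tables look like – the idea is simply to count failures per report and flag the repeat offenders:

```python
from collections import Counter

def suspect_reports(lines, threshold=20):
    """Count failed executions per report; return repeat offenders, worst first.

    Assumes a pipe-delimited extract: "timestamp | report name | status | message".
    This format is an assumption for the sketch, not the real Cognos log layout.
    """
    failures = Counter()
    for line in lines:
        parts = [part.strip() for part in line.split("|")]
        if len(parts) >= 3 and parts[2].lower() == "failure":
            failures[parts[1]] += 1
    return [(report, n) for report, n in failures.most_common() if n >= threshold]

# A report that failed 25 times in a short window is a prime suspect;
# one that failed only 3 times stays below the threshold.
log = (["09:01 | Sales Q1 | Failure | bad column"] * 25
       + ["09:02 | Inventory | Success | ok"] * 5
       + ["09:03 | Inventory | Failure | timeout"] * 3)
print(suspect_reports(log))  # → [('Sales Q1', 25)]
```

Once you have a short list like this, hiding just those reports via security is a much smaller experiment than guessing across dozens of developers.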

And as for developers, as a general rule: if an environment is unstable and you are running reports that have been recently developed, that get stuck whenever you run them, or that return error messages, the two things may be connected. Your report might, unintentionally, be causing the instability. Alert whoever it is you need to alert.


The author, Nimrod Avissar, is a Cognos Architect and Cognos Team Leader at Data-Mine Israel. 


Information Security, Risk Management and Back End – Guest Post

Paul is my good friend, a Cognos guru and a colleague of mine for a while now, and I read his blog regularly. When I read his latest post, regarding information security in Cognos reports, I felt that, atypically, some points in it needed refinement. So I asked Paul if he would kindly agree to host my claims, and he generously accepted. So, without further ado, here are my thoughts on Paul’s post.

First of all, it should be noted that Paul is absolutely correct on two points: data security should never, ever be applied at the report level, and data security should always be applied at the back end. However, I’m not sure that the possible leak via token prompts constitutes a viable security threat that requires extreme measures, and I am quite certain that Cognos’ metadata modeling tool, Framework Manager, is not really the back end of the system. Here’s why.

Information Security as Risk Management

We hold these truths to be self-evident, that all systems are vulnerable security-wise. A server is seldom hardened enough, communication is seldom encrypted enough, and passwords are rarely kept well enough to prevent a malicious hacker with intranet access from stealing data, changing data or plainly viewing data they’re not supposed to. Imagine any IT system you know, and imagine a user at least as savvy as you are, entirely malicious and keen to do one thing only: to view data they’re not supposed to. If that is the mental picture you have when securing your system, you will need to double your investment in information security just to start tackling it.

But we cannot ignore two facts: one, users are seldom malicious and not very often savvy; and two, not all data is worth the investment of protecting.

Let’s start with the first point: users aren’t very often savvy. Users – analysts, professionals, economists and so on – are usually professional people who were hired to perform tasks they’ve specialized in. Most of them – IT people aside – did not spend their adult years learning computer systems, data structures, communication protocols and password policies. Some of them may be proficient, but very few of them will be hackers, or at the very least good hackers. Those who are, if they exist, will also know they would be the first suspects in any leak. Which brings me to the second part of this point: users and systems do not exist in a vacuum. They exist within an organization, and the organization deals with many of the threats internally: there’s the risk of getting shamefully, never-to-be-hired-by-anyone fired, the risk of being sued – in short, the risk of getting caught. A system that is openly monitored can, just by the act of monitoring, seriously deter people from trying to sneak a peek at their colleagues’ salaries in the HR system. On top of that, most people come to work to do an honest job for honest pay. The majority won’t attempt to hack a system or access restricted data not just because they don’t know how to, but because they have no reason to – because the bottom line is that most people are not evil.

The second point was that while we obscure and hide data from unauthorized users, not all data should be protected equally. A company’s customers’ credit card numbers should be protected for dear life; their marital status, not so much. The extreme example: it makes no sense to spend 10 working hours patching up a potential leak whose damage would cost less than 10 working hours.

So, when determining how to allocate our information security resources, we weigh the feasibility of the loophole being found and exploited, and the sensitivity of the data, against the various costs (in terms of user friendliness, performance, labour hours and so on) of patching up the potential threat. In other words, we assess the risk and decide whether it’s worth addressing, and to what level.

To return to Paul’s original post: while he found a very elegant security hole with token prompts, I think in most cases it is the kind of breach we normally wouldn’t invest in blocking. Even after reading and understanding Paul’s post, most users will not know how to identify that a report has a token prompt, or how to make use of it. And even if they did, most users are not fluent in the database structure and in the possibilities of SQL. If they were, we’d be out of a job. This isn’t security by obscurity, because we do not assume the data is secure – only that hacking it is unfeasible.

The solution Paul offered – a parameter map – is, on the other hand, costly on several levels. First, it requires republishing the package for every new token prompt, which is cumbersome and may affect other reports, especially if the model is constantly developed. It also prolongs development times. And large parameter maps force Cognos into laboured local processing, which hurts performance. Weigh that against users who are trusted to view a certain report based on a certain package, and who are not very likely to find that breach and make use of it. So, in my opinion, unless the kind of data that could be exposed is so sensitive that no threat, no matter how unfeasible, can be tolerated, it isn’t worth the investment.

Framework As Middle Tier

But suppose the data I’m protecting is the kind of data that simply cannot be risked, at any cost. This could be the case, for example, if I’m required by law to keep certain data safeguarded, under heavy penalties, or if a leak would cause publicity damage, or in the case of industrial secrets, and so on. I would still argue against Paul’s solution, because Paul was right to assert that security is a matter for the back end – and Framework Manager is not the back end.

The Cognos web portal and viewer are certainly the front end. They handle data visualization, navigation and object-level security (which user is allowed to see which reports). As mentioned earlier, they should never handle data-level security. The back end is the database itself, where data is stored for safekeeping. Framework Manager is a middle tier between the front end and the back end, handling the query logic; it can be thought of as a logic engine. Data-level security isn’t normally part of the logic, as opposed to object-level security (which fields and query subjects are available to whom), because the logic is applied to the available data. Having the same tier that manipulates the given data also decide which data to manipulate opens the door to a host of problems. Why? Because we’re making security a part of the logic rather than part of the infrastructure, and that means we’re tying security and logic together – any change to one might invalidate the other. Translate tables via a parameter map for security reasons, and you’re adding relationships which might affect existing ones. Change the relationships of a few query subjects, or add fields, and you may be opening a new security leak.

Which is why, if your data is very sensitive, you need to secure it at the database level. There are several ways of doing that. You could use data source command blocks to pass the user’s details to the database on every session, for either logging or identification purposes. With SQL Server, you have built-in single sign-on abilities you can use. With Oracle you can implement single sign-on using a proxy user; if further security is required, that proxy user’s credentials can be further secured by SEPS, and ALTER USER grants can ensure that while logged in via the proxy, only certain actions are allowed.
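For illustration, an Oracle proxy-authentication command block of the sort IBM documents for data source connections looks roughly like this. Treat it as a sketch, not copy-paste configuration: the session-parameter macro and attribute name should be verified against your Cognos version and namespace.

```xml
<commandBlock>
  <commands>
    <sessionStartCommand>
      <arguments>
        <!-- hand the authenticated user's name to Oracle as the proxy client -->
        <argument>
          <name>OCI_ATTR_USERNAME</name>
          <value>#$account.personalInfo.userName#</value>
        </argument>
      </arguments>
    </sessionStartCommand>
  </commands>
</commandBlock>
```

On the Oracle side, the matching grant is the standard `ALTER USER end_user GRANT CONNECT THROUGH proxy_user;`, optionally with a `WITH ROLE` clause so the proxied session can only do what it needs.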

To conclude, I believe the token prompt loophole Paul found is, in most cases, not worth the effort of securing. When it is – because no risk is a small risk with certain types of data – security should be implemented at the database level, not at the Framework level. And this isn’t just about Paul’s example: it is the proper way to tackle any security gap that comes up, whether in a survey or from experience. Evaluate the risk first, then take the proper action at the proper level.

The author, Nimrod Avissar, is a Cognos Architect and Cognos Team Leader at Data-Mine Israel.