Saturday, November 13, 2010

Differentiating New Record & Update Record

There are cases when a particular data validation or Business Process invocation should only occur when a new record is being written to the database, but not when an existing record is being updated. Or it may be the other way around; either way, the Siebel developer has to figure out which operation is happening. I have found many ways to achieve this; here are my favourite three.

1.       Using a Boolean flag in script. This is how I first implemented such a scenario. I was new to Siebel and hadn't yet mastered all the scripting events. The easiest way out seemed to be to declare a Boolean flag in the declarations section of the buscomp server script and give it a default value of false. Then, in the NewRecord event, this flag is set to true. Finally, in the PreWriteRecord or WriteRecord event, the value of this flag can be checked, and new records can be differentiated from existing records.
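The flag approach can be sketched roughly as below. This is Siebel eScript, meant to live in a buscomp server script; the constant `ContinueOperation` is predefined inside Siebel and is stubbed here only so the sketch can run standalone.

```javascript
// Minimal sketch of the flag approach, assuming the standard BusComp
// server script events. ContinueOperation is predefined inside Siebel;
// it is stubbed here only so the sketch runs standalone.
var ContinueOperation = 1;

// (declarations) section: the flag defaults to false
var bIsNewRecord = false;

function BusComp_NewRecord ()
{
    bIsNewRecord = true;   // a brand new record was just created
}

function BusComp_PreWriteRecord ()
{
    if (bIsNewRecord)
    {
        // data validation / Business Process invocation for new records only
    }
    return (ContinueOperation);
}

function BusComp_WriteRecord ()
{
    bIsNewRecord = false;  // reset, so later updates are not misclassified
}
```

One caveat of this approach: if the user creates a record and then undoes it, the flag must also be reset, or the next update will be misclassified as a new record.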

2.       Runtime events. This is a no-scripting approach to the same problem. In Runtime Events, if you look at the supported buscomp events, you can see that Siebel now provides three events for the WriteRecord operation:

·         WriteRecord : Triggered every time after records are committed.

·         WriteRecordNew : Triggered only when new records are committed.

·         WriteRecordUpdated: Triggered only when existing records are updated.

But these events only occur AFTER the records are committed. They can't be used to do validation/invocation before records are committed.

3.       IsNewRecordPending. This is a new specialised Business Component method, meant for EBCs, documented here. But on trying it out, I found that it works perfectly well in normal business components too. This method can be invoked from any event at the BC level, but it only makes sense to invoke it in PreWriteRecord.

var isNewRecord = this.InvokeMethod("IsNewRecordPending");

This way new records can be differentiated from old records before they are committed to the system.
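Put together, a PreWriteRecord handler using this method might look like the sketch below. Note the assumptions: inside Siebel, `this` is the buscomp and `ContinueOperation` is predefined (stubbed here so the sketch runs standalone), and the method is assumed to return the string "Y" when a new record is pending; verify the return value on your Siebel version.

```javascript
// Hedged sketch of option 3. ContinueOperation is predefined inside
// Siebel and stubbed here only so the sketch runs standalone.
var ContinueOperation = 1;

function BusComp_PreWriteRecord ()
{
    // Assumption: "IsNewRecordPending" returns "Y" for a pending new record
    var isNewRecord = this.InvokeMethod("IsNewRecordPending");
    if (isNewRecord == "Y")
    {
        // validation that should run only for new, uncommitted records
    }
    return (ContinueOperation);
}
```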

Friday, October 15, 2010

Declarative Alternatives to Using Siebel Scripting (Doc ID 477842.1)

Maybe you already know this, but there is a document on SupportWeb that describes many declarative alternatives to scripting. It is Doc ID 477842.1, available here:


It also points to another document, Doc ID 478298.1, which discusses the Data Validation Manager.



Wednesday, September 1, 2010

Validating multiple Datamappers

We were facing this strange error during ADM import on the target system:

Source component % does not exist.(SBL-EAI-04063)

This was coming when we tried to import the datamappers into the target system. We follow a continuous development model with numerous integrations, and the external IOs for them keep changing. The problem occurred because the newly imported WSDL did not contain some IC levels which had been developed and mapped earlier. But the problem was complicated because we did not know which datamapper to check.

Now there is a very useful button that could help us here. On the Datamap Administration view, there is a Validate button on the top applet. It checks the structure of the mapped IOs against the ones compiled into the SRF and throws up validation errors. For some time I had been wondering about the functionality behind this button.
I found this in the Siebel log files when the button was clicked:

Begin: Business Service 'EAI Data Transformation Engine' invoke method: 'Validate'

But if you check the definition of this BS in Tools, you will not find the Validate method. After a little more tweaking, I was able to figure out the input parameters. One thing I found was that if the datamap is valid, the BS does not return or throw any message, and if there is a validation error, it throws an exception. Hence, if multiple datamaps need to be validated, the try/catch block must be put inside a loop. I wrote a simple script as a client-side business service which validates multiple datamaps in one go. The search spec can be modified according to your project requirements.

function Service_PreInvokeMethod (MethodName, Inputs, Outputs)
{
    var sMessage = "";
    var iCount = 0;
    var oBCDataMap = TheApplication().GetBusObject("EAI Data Map").GetBusComp("EAI Object Map");
    oBCDataMap.ClearToQuery();
    oBCDataMap.SetSearchSpec("Source Object Name", "*");  // modify per project requirements
    oBCDataMap.ExecuteQuery(ForwardOnly);
    var oBSDTE = TheApplication().GetService("EAI Data Transformation Engine");
    var psInput = TheApplication().NewPropertySet();
    var psOutput = TheApplication().NewPropertySet();
    var bIsRecord = oBCDataMap.FirstRecord();
    while (bIsRecord)
    {
        try
        {
            // Assumption: the datamap to validate is passed via the "MapName" property
            psInput.SetProperty("MapName", oBCDataMap.GetFieldValue("Name"));
            oBSDTE.InvokeMethod("Validate", psInput, psOutput);
        }
        catch (e)
        {
            // Validate throws on error and returns nothing when the map is valid
            sMessage = sMessage + oBCDataMap.GetFieldValue("Name") + ":" + e.toString() + "\r";
        }
        iCount = iCount + 1;
        bIsRecord = oBCDataMap.NextRecord();
    }
    sMessage == "" ? TheApplication().RaiseErrorText(iCount + " Datamaps validated successfully.") : TheApplication().RaiseErrorText(sMessage);
    return (CancelOperation);
}

When the code is executed, it validates all datamaps using the EAI Data Transformation Engine BS's Validate method, and presents the errors in a message box.

You can hit Ctrl-C now to copy this message and paste it into Notepad.
AMSRefData_ServiceCenter:Parent component map 'q' not found.(SBL-EAI-04061)
AMSRefData_Service_Func:Parent component map 'q' not found.(SBL-EAI-04061)
What'ya think ?

Tuesday, August 17, 2010

OpenSolaris is Dead

Oracle just sued Google for patent infringement on the Android system, and now OpenSolaris is dead. Here is a company with an aggressive business strategy, buying and consuming everything in its way. Now it has eaten its way into the very principles of Open Source. Here is an article regarding the OpenSolaris OS.

OpenSolaris is Dead.

via Planet OpenSolaris by Steven Stallion on 8/15/10

What follows is an email sent internally to Oracle Solaris Engineers which describes Oracle's true intentions toward the OpenSolaris project and the future of Oracle Solaris.

This concludes over four years that I (and many other external contributors) have worked on the OpenSolaris project. This is a terrible sendoff for countless hours of work - for quality software which will now ship as an Oracle product that we (the original authors) can no longer obtain on an unrestricted basis.

I can only maintain that the software we worked on was for the betterment of all, not for any one company's bottom line. This is truly a perversion of the open source spirit.

Solaris Engineering,

Today we are announcing a set of decisions regarding the path to
Solaris 11, and answering key pending questions on open source, open
development, software and binary licenses, and how developers and
early adopters will be able to use Solaris 11 technology before its
release in 2011.

As you all know, the term "OpenSolaris" has been used colloquially to
refer to any or all of a collection of source code, a development
model, a web site, a logo, a binary release, a source license, a
community, and many other related things. So it's taken a while to go
over each issue from an organizational and business perspective, and
align on the correct next step. Therefore, please take the time to
read all of the detail here carefully. We'll discuss our strategy
first, and then the decisions and changes to our policies and
processes that implement that strategy.

Solaris Strategy

Solaris is the #1 Enterprise Operating System. We have the leading
share of business applications on Solaris today, including both SPARC
and x64. We have more than twice the application base of AIX and HP-
UX combined. We have a brand that stands for innovation, quality,
security, and trust, built on our 20-year investment in Solaris
operating system engineering.

From a business perspective, the purpose of our investment in Solaris
engineering is to drive our overall server business, including both
SPARC and x64, and to drive business advantages resulting from
integration of multiple components in the Oracle portfolio. This
includes combining our servers with our storage, our servers with our
switches, Oracle applications with Solaris, and the effectiveness of
the service experience resulting from these combinations. All
together, Solaris drives aggregate business measured in many billions
of dollars, with significant growth potential.

We are increasing investment in Solaris, including hiring operating
system expertise from throughout the industry, as a sign of our
commitment to these goals. Solaris is not something we outsource to
others, it is not the assembly of someone else's technology, and it is
not a sustaining-only product. We expect the top operating systems
engineers in the industry, i.e. all of you, to be creating and
delivering innovations that continue to make Solaris unique,
differentiated, and valuable to our customers, and a unique asset of
our business.

Solaris must stand alone as a best-of-breed technology for Oracle's
enterprise customers. We want all of them to think "If this has to
work, then it runs on Solaris." That's the Solaris brand. That is
where our scalability to more than a few sockets of CPU and gigabytes
of DRAM matters. That is why we reliably deliver millions of IOPS of
storage, networking, and Infiniband. That is why we have unique
properties around file and data management, security and namespace
isolation, fault management, and observability. And we also want our
customers to know that Solaris is and continues to be a source of new
ideas and new technologies-- ones that simplify their business and
optimize their applications. That's what made Solaris 10 the most
innovative operating system release ever. And that is the same focus
that will drive a new set of innovations in Solaris 11.

For Solaris to stand alone as the best-of-breed operating system in
Oracle's complete and open portfolio, it must run well on other server
hardware and execute everyone's applications, while delivering unique
optimizations for our hardware and our applications. That is the
central value proposition of Oracle's complete, open, and integrated
strategy. And these are complementary and not contradictory goals
that we will achieve through proper design and engineering.

The growth opportunity for Solaris has never been greater. As one
example, Solaris is used by about 40% of Oracle's enterprise
customers, which means we have a 60% growth opportunity in our top
customers alone. In absolute numbers, there are 130,000 Oracle
customers in North America alone who don't use our servers and storage
yet, and a global customer base of 350,000 (the prior Sun base was
~35,000). That's a huge opportunity we can go attack as a combined
company that will increase Solaris adoption and the overall Hardware
server revenue. Our success will also increase the amount of effort
ISVs exert optimizing their applications for Solaris.

We will continue to grow a vibrant developer and system administrator
community for Solaris. Delivery of binary releases, delivery of APIs
in source or binary form, delivery of open source code, delivery of
technical documentation, and engineering of upstream contributions to
common industry technologies (such as Apache, Perl, OFED, and many,
many others) will be part of that activity. But we will also make
specific decisions about why and when we do those things, following
two core principles: (1) We can't do everything. The limiting factor
is our engineering bandwidth measured in people and time. So we have
to ensure our top priority is driving delivery of the #1 Enterprise
Operating System, Solaris 11, to grow our systems business; and (2) We
want the adoption of our technology and intellectual property to
accelerate our overall goals, yet not permit competitors to derive
business advantage (or FUD) from our innovations before we do.

We are using our investment in core Solaris innovation and engineering
to drive multiple businesses, through multiple product lines. This
already includes our Solaris operating system for Enterprise, and our
ZFS Storage product line, and will soon include other Oracle
products. This strategy is all about creating more value from a set
of common software investments: it makes everything you do more
valuable and used by more people worldwide. It also means you as an
individual engineer or manager have an even greater responsibility to
understand the broader business and technical contexts in which your
engineering is deployed.

Solaris Decisions

We will continue to use the CDDL license statement in nearly all
Solaris source code files. We will not remove the CDDL from any files
in Solaris to which it already applies, and new source code files that
are created will follow the current policy regarding applying the CDDL
(simply, that usr/src files will have the CDDL, and the very small
minority of files in usr/closed might not have it). Use of other open
licenses in non-ON consolidations (e.g. GPL in the Desktop area) will
also continue. As before, requests to change the license associated
with source code are case-by-case decisions.

We will distribute updates to approved CDDL or other open source-
licensed code following full releases of our enterprise Solaris
operating system. In this manner, new technology innovations will
show up in our releases before anywhere else. We will no longer
distribute source code for the entirety of the Solaris operating
system in real-time while it is developed, on a nightly basis.

Anyone who is consuming Solaris code using the CDDL, whether in pieces
or as a part of the OpenSolaris source distribution or a derivative
thereof, would therefore be able to consume any updates we release at
that time, under the terms of the CDDL, LGPL, or whatever license

We will have a technology partner program to permit our industry
partners full access to the in-development Solaris source code through
the Oracle Technology Network (OTN). This will include both early
access to code and binaries, as well as contributions to us where that
is appropriate. All such partnerships will be evaluated on a case-by-
case basis, but certainly our core, existing technology partnerships,
such as the one with Intel, are examples of valued participation.

We will encourage and listen to any and all license requests for
Solaris technology, either in part or in whole. All such requests
will be evaluated on a case-by-case basis, but we believe there are
many complementary areas where new partnership opportunities exist to
expand use of our IP.

We will continue active open development, including upstream
contributions, in specific areas that accelerate our overall Solaris
goals. Examples include our activities around Gnome and X11, IPS
packaging, and our work to optimize ecosystems like Apache, OpenSSL,
and Perl on Solaris.

We will deliver technical design information, in the form of
documentation, design documents, and source code descriptions, through
our OTN presence for Solaris. We will no longer post advance
technical descriptions of every single ARC case by default, indicating
what technical innovations might be present in future Solaris
releases. We can at any time make a specific decision to post advance
technical information for any project, when it serves a particular
useful need to do so.

We will have a Solaris 11 binary distribution, called Solaris 11
Express, that will have a free developer RTU license, and an optional
support plan. Solaris 11 Express will debut by the end of this
calendar year, and we will issue updates to it, leading to the full
release of Solaris 11 in 2011.

All of Oracle's efforts on binary distributions of Solaris technology
will be focused on Solaris 11. We will not release any other binary
distributions, such as nightly or bi-weekly builds of Solaris
binaries, or an OpenSolaris 2010.05 or later distribution. We will
determine a simple, cost-effective means of getting enterprise users
of prior OpenSolaris binary releases to migrate to S11 Express.

We will have a Solaris 11 Platinum Customer Program, including direct
engineering involvement and feedback, for customers using our Solaris
11 technology. We will be asking all of you to participate in this
endeavor, bringing with us the benefit of previous Sun Platinum
programs, while utilizing the much larger megaphone that is available
to us now as a combined company.

We look forward to everyone's continued work on Solaris 11. Our goal
is simply to make it the best and most important release of Solaris

-Mike Shapiro, Bill Nesheim, Chris Armes


Wednesday, August 11, 2010

Buggy software: Why do we put up with it?


This is a guest post by Michael Kassner via TechRepublic's IT Leadership blog.

During one of the breakout sessions at TechRepublic's Live 2010 Conference this past week, I was questioning why we put up with software that has bugs and vulnerabilities. To IT-security types like me, it's a concern. Eliminate bugs and you shut the door on most malware.

After that particular breakout session, Toni Bowers, Head Blogs Editor for TR, and I talked about my concerns. She suggested that I pass what I learned on to you. So, here goes. I cajoled the "software savvy" TR writers into answering the following question:

Consumers would never accept a car or other traditional goods that are flawed, yet they are willing to buy software that is. Why do you think that is?

Here are their answers. I hope you find them as interesting as I do:

Chad Perrin

The question of why software vendors produce buggy software and why consumers accept it has no simple answer. The reasons vary from incompetence plus overconfidence to being the dominant business model at the other extreme. Here are some of my thoughts:

  • The dominant business model in the software industry is one that creates and relies on otherwise unnecessary complexity. That complexity both creates bugs and hides them from view. Paraphrasing C. A. R. Hoare, there are two ways to build software: Make it so simple that there are obviously no bugs, or make it so complex that there are no obvious bugs. The former is much more difficult and does not lend itself well to enticing people to upgrade to the next version.
  • People are so focused on feature marketing that they do not stop to think about bugs until it is too late. After generations of this, and of the problem getting worse all the time, end users have developed a sort of Stockholm Syndrome with regard to buggy software. They believe it is normal, expected, and inescapable.
  • Features and bugs act very similarly a lot of the time once software exceeds a particular level of complexity. They do things that are surprising, or at least unexpected. People grow used to this until they become inured to surprise without the surprising behavior being reduced at all — in fact, it only gets worse. "It's a feature, not a bug" starts to sound reasonable and believable.

Chip Camden

Having worked in auto parts for several years, I can tell you that very few cars roll off the assembly line without any flaws. That's why they have a thing called recalls.

Furthermore, a serious flaw in an automobile can cost someone's life. That usually isn't the case with software, and where it is the case (medical, missile guidance, aircraft navigation), then the extra expense of a higher attention to flawlessness is considered worthwhile.

Ultimately, it's market-driven. We could make software that performed to much more exacting tolerances, but it would be much more costly. The buying public is content to pay a near-zero cost for "good enough" rather than putting a dent in their wallets for "flawless." [Editor's note: you can read more from Chip Camden in TechRepublic's IT Consultant blog.]

Erik Eckel

I think the software industry is very different from most any other. Vendors must try writing software that will work on multiple platforms (Linux, Windows, Mac) and be used by a variety of users with greatly differentiated skill levels at companies working in numerous different industries. That's a pretty tall order.

Imagine trying to make a car that could be driven by a 5′4″ woman or a 6′5″ man; that could run on gasoline, diesel, or propane; while also possessing the ability to carry up to eight people or 6,000 pounds of payload. Oh, and it must get 28 miles to the gallon, cost less than $25K, and go 100,000 miles between tune-ups.

You couldn't do it!

So, I feel for software manufacturers. The Intuits, Microsofts, Apples, and Symantecs of the world have a wide constituency to satisfy. Someone's always going to be complaining.

I think the 37signals guys may have it best. In their current best-seller ReWork, they note that one of the keys to their success is saying no to customers and limiting the amount of features they include in their software programs.

I think there's a lesson there for all of us. [Editor's note: You can read more from Erik in TechRepublic's Macs in Business blog.]

Jack Wallen

The answer is very simple: Marketing. If you ask the average consumer (those who buy the bulk of computers) if they knew there was an operating system out there far superior, safer, and more reliable than the one they used AND it was free, they would react with surprise. Their first question might be "Why didn't we know about that?" The reason is because Microsoft is a HUGE company with a HUGE PR budget and the ability to shove advertising down the throats of the consumers.

To continue with your analogy:

Tesla has a roadster that is 100% electric, can go over 300 miles on a single charge, can go from 0 to 60 in 3.7 seconds — yet the majority of people don't know about it. Why? Marketing. If one Linux company could start to produce witty, well-done television commercials things would quickly change.

But think about this: Linux has done fairly well for itself without having to spend a penny on advertising (relatively speaking). Word of mouth has been a powerful ally to the Linux operating system. However, in order to raise it to a higher level, PR and marketing will have to be used. [Editor's note: You can read more by Jack Wallen in TechRepublic's Linux and Open Source blog.]

Justin James

Some thoughts that come to mind (as someone struggling with a phone heralded by others and the media as a "miracle phone," but it is plagued with problems):

  • "No warranty, express or implied" is attached to every piece of software ever made and is enforceable. Consumers know that they have zero rights, so they feel happy when it works.
  • "Gadget lust" blinds people to issues. People don't want to admit that they bought a piece of junk, so they just deal with the problems and tell everyone how much they love the software/device/etc.
  • In corporate environments, the people who live with the bad software are often the people who do not pick it. Those who did select it sweep the problems under the rug because it makes them look bad, or they feel it's a question of "stupid users" who "just don't get it."
  • Too many problems do not appear until whatever initial return period or contract cancellation period is over.
  • People expect to have problems.
  • People assume that the problems are their own fault ("I'm too dumb to use this right!").
  • In corporate environments, many products require a lengthy and expensive integration process; there is no way to accurately judge their quality until that is done, and afterward, it is often not clear if the base product or the integration work is the root cause of problems. To make matters worse, once you dump, say, $150,000 into customizing a $200,000 package that you spent $50,000 on hardware to support, do you really want to say, "gee, it looked good when we started, but this is a dud, let's dump it"?

Overall, it's a combination of people feeling helpless on the user end of things, and the decision makers being unwilling or unable to do anything about it once a commitment is made. [Editor's note: You can read more by Justin James in TechRepublic's Programming and Development blog.]

Patrick Gray

I think there are two factors at work that would cause me to question your premise:

Perceptions of software "flaws" are often based more on market saturation than technical elegance.

Most mainstream technical products (hardware and software) seem to have a higher incidence of flaws because they have a higher user base. This is the classic "Windows is buggy versus [a more obscure OS]" argument.

I don't think Windows is inferior, it's just a mass-market product and thus gets used and abused by the highest percentage of the population. Because Mac OS X has gained traction, it's now getting hit with malware as more people use the software rather than due to some inherent flaw.

There are considerations that outweigh flawed products, mostly getting valuable features early.

I think technical elegance often becomes second fiddle to other concerns at both a corporate and personal level. Why? We want new features and are willing to put up with partially baked software. This extends to your automotive analogy as well.

I bought a new motorcycle from BMW in its first model year (ever hear the old bromide never to buy the first model year vehicle?). The bike has had four recalls, including replacing the front axle (a front axle failure at 80 mph would be bad). Despite this product having flaws, the trade-off of having an extra year's riding was worth it to me.

If we all wanted perfect and bug-free code, first and foremost, we'd probably all be running MS DOS or a text-based Linux that hadn't had any features added in a decade. [Editor's note: You can read more by Patrick Gray in TechRepublic's IT Leadership blog.]

Rick Vanover

While software quality should be the first priority in whether or not we implement something, many times IT customers have their hands tied. Simply forgoing a piece of software if all offerings will not meet their needs will not be an option.

The natural alternative is to develop something in-house, but that too, may be cost prohibitive. This is an age-old battle of having our hands tied in a way to get pushed along to new products, and history has done nothing but continually confirm this for us.

One example is the file server market, Novell NetWare is still a superior file server product to Windows NT, 2000, 2003, or 2008; yet we all know which way the market and broader supported configurations went. There is no simple answer on how we can address this, in my opinion. [Editor's note: You can read more by Rick in TechRepublic's Network Administrator blog.]

Final thoughts

It seems we the users want the latest and greatest software, even if it means accepting buggy code. Do you agree with the TR gurus? I know we all are anxious to learn your opinions, so let them fly.

Chad Perrin wanted me to mention that he has a lot more to say about this subject. Please look for his article in the IT Security blog of TechRepublic.

Excerpted from Buggy software: Why do we put up with it? ZDNet

Monday, August 2, 2010

Parent Child Relationships in VBCs - BC User Property : "Populate Child Data"

Virtual Business Components provide a nice way of presenting data via web services. Each time the VBC is loaded/refreshed in the UI, the integration behind it is triggered, and the resulting response appears in the UI. But what if there is a parent-child relationship in the data being returned, and the requirement is to present each level of the hierarchy in a separate applet? In normal BCs, the data is pulled from the tables, and if links are provided, the child BC gets refreshed when the parent record changes. How can we achieve the same in VBCs? There is a little-known BC user property, "Populate Child Data", which can be used to achieve this nice effect without scripting or repeatedly invoking the web service. Here is what you do.


·         Create the two VBCs and the two applets for them. Also create the BO and a view based on the BO.

·         Both the VBCs must refer to the same IO, either via the BC user property "Outgoing Integration Object Name", or by hardcoding the IO name in the WF.

·         The IO should have two ICs, one for parent level and one for child level in response data. The external names must be correctly populated with the BC names created.

·         A link must be created between the parent and child BCs, and the child BC has to be added to the BO via this link.


Now if everything is correctly configured, compile everything and fire up the client application. Navigating to the view will bring up the data in the two applets, correctly maintaining the parent-child relationship between them. Moving to the next record at the parent level will refresh the child-level data, as the link is in context.


Now go ahead and check out the integration logs for this integration. There is a nice tutorial here.


You will find that the web service integration was actually triggered twice, once for each level of the hierarchy. The same request goes out and the same response comes back, so it means double the work. Now:


·         Go to the child BC you created and add a new BC user property.

·         In the name column, provide: "Populate Child Data"

·         In the value column, provide: "Y"


Compile the BC and load the view again. Data still comes up in the UI as expected, but if you check the integration logs, you will see that the web service was invoked only once.


The BC user property "Populate Child Data" doesn't appear to be documented in Bookshelf or on SupportWeb. If you are not happy with this approach, there is a scripting alternative suggested on SupportWeb here.




Sunday, August 1, 2010

Nice process, but what about the engineering bits?


via Ayende Rahien on 2/19/10

Software processes have always been a popular topic of discussion in our industry. These discussions can get quite heated, with advocates of the "stable / stale" Waterfall method pointing fingers toward "rapid / rabid" Agile methods, with the CMMI people throwing documents around and Lean people standing on the sidelines muttering about Waste.

This isn't a post about a specific software process, I'll defer that to another day. Instead, I want to focus on a flaw in the basic building blocks in many* software building processes.

They ignore the actual building the software.

That may sound ridiculous on the face of it, after all, how can a software process ignore the act of building software. But take a look at the following diagrams:


If you'll pay attention, you'll notice that those processes talk about everything except how to actually build software. They talk about people, about requirements, about managing customers, about a whole lot of things, but not about the part where you have people sitting down and writing code. In most of those, in fact, that part is usually defined as one of those:


Why is that a problem? After all, isn't there a big distinction between software engineering (we know what to do, now let us do it) and project management (getting to know what we need to do, and verifying that we did it right)? Those processes deal primarily with project management and leave the engineering part to be defined in a way that fits that particular project. Surely that is better, right? In theory, it might be. But there is a big problem when you have a software process that ignores the software engineering aspects of building software.

The problem is that, in many cases, there are hidden assumptions that are going to hammer you down the road if you use a certain process with engineering practices that don't fit it. Take a look at the following chart, showing a team's velocity over time; does this look familiar?


The term I heard used for this is Scrum Wall, but I have seen similar results in other processes as well. The best description for that is Allan Kelly's:

You hit the Scrum wall when you adopt Scrum and everything goes well, then, after a few Sprints things don't work any more - to use an English expression, they go pear shaped. You can't keep your commitments, you can't release software, your customers get annoyed and angry, it looks like Scrum is broken.

This is what happens when you adopt Scrum without technical practices such as Test Driven Development, continuous integration and refactoring. When teams adopt the Scrum process, they go faster, show progress, things look good... and then the quality becomes a problem. Now the team are fighting through quick sand.

The code quality is poor and developers are expected to continue to make progress. Maybe the Scrum Master/Project Manager reverts to past behavior and demands overtime and weekend working. Maybe the team start busting a gut to keep their commitments. Either way the team is heading for burn-out.

The major issue is that focusing so much effort and time on project management, with what amounts to willful ignorance of the technical and engineering practices, will inevitably lead to disaster. The process of building software is inextricably linked to the engineering practices involved in building the software. Moreover, some technical practices are actively harmful in some scenarios and life savers in others.

Many Agile and post-Agile processes focus on short cycles, each of them producing something with a distinct value to the customer. That may be an iteration, a commit or a feature, where the goal is to increase the velocity over time so we can provide as much value to the customer in as short a time as possible. What those processes ignore are things like technical debt, large scale refactoring and non functional efforts. Oh, you see those things mentioned on the edge, but they aren't something that is dealt with head on, as a core issue to consider.

There is a bit more to that, actually. The software engineering practices and the project management strategies are linked, and of paramount importance when the time comes to decide how the software should actually be built. No, this is not a tautology. We just need to take into account Conway's law and expand on it a bit.

Any organization that designs a system will inevitably produce a design whose structure is a copy of the organization's communication structure.

Part of the design process of a project should include designing the team(s) structure, the project management strategy and the software engineering practices in order to align the end result with what is desired. Ignoring this leads to imbalance in the project, and if that imbalance is big enough, and goes on for long enough, the project is going to rip itself apart.

* Nitpicker corner: I said many, not all. Don't bother listing software processes that deal with it. I had a reason to explicitly list the processes that I did.

Friday, July 23, 2010

Javascript trick to quickly navigate views.

Siebel’s hierarchical way of organizing views under the sitemap is a useful feature: everything is arranged under screens as per context. But sometimes the only way to access a certain view is to go via the sitemap, and especially for administrators, this means sifting through pages of links to find the correct view. Being a Siebel EAI integration developer, I find myself frequently visiting three main views in the Siebel application, on my local as well as the thin client. Most of my work can be done in Tools, but the Datamapper Administration view, Workflow Process Instance and Web Service Administration views are my most visited views. In order to make navigation easier, I used to create bookmarks in the browser using the GotoView command to quickly take me to a view.



But this meant a different set of bookmarks for the local client, and a different set for every server/environment. So I ended up using javascript to do the navigation for me.


Create a bookmark in Internet Explorer, then enter the text below for the Business Service Simulator view:


javascript:var url = new String(window.location);var p = url.substring(0, url.indexOf('start.swe?'));p = p + 'start.swe?SWECmd=GotoView&SWEView=Business Service Test View';window.location = p;


Once you have logged into your Siebel application (local/server), just select this bookmark, and you should be taken to that view.


Here is the bookmark for the Datamapper Administration view:


javascript:var url = new String(window.location);var p = url.substring(0, url.indexOf('start.swe?'));p = p + 'start.swe?SWECmd=GotoView&SWEView=EAI DTE Data Map Editor';window.location = p;



Code junkies can edit the code and add any view they want.
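As a sketch, the URL-building logic in those bookmarklets can be pulled into a small helper function; the host and function name below are just illustrative, not part of any Siebel API:

```javascript
// Build a Siebel GotoView URL from the current page URL.
// Assumes the URL contains 'start.swe?' (true for a standard SWE session).
function gotoViewUrl(currentUrl, viewName) {
  var base = currentUrl.substring(0, currentUrl.indexOf('start.swe?'));
  return base + 'start.swe?SWECmd=GotoView&SWEView=' + viewName;
}

// Example: jump to the Business Service Simulator
var target = gotoViewUrl(
  'http://host/sales_enu/start.swe?SWECmd=Login',
  'Business Service Test View');
// target → 'http://host/sales_enu/start.swe?SWECmd=GotoView&SWEView=Business Service Test View'
```

In a bookmarklet, you would then assign the result to `window.location` to perform the navigation.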


Monday, July 19, 2010

Just discovered John Mayer

Just discovered John Mayer. Nice, soft, carefree style.

Monday, July 12, 2010

Toy Story 3 - best movie of this year.

We are halfway through this year, and the only movie which stands out clearly is Toy Story 3. I have always loved animation. I love the way animators are able to make people laugh. But I am even more intrigued when they are able to turn people emotional, or even make us cry. The first time I saw this was in Disney Pixar’s Up! There is a complete sequence of storytelling with moving images and soothing background music, without a single bit of dialogue, and yet it turns the viewer emotional. The only thing that stopped me from sobbing was that I was watching it at home, and not in the theatre. And now, I had a similar feeling of nostalgia when I watched Toy Story 3, especially the ending. Again, there is no piece of dialogue meant for the viewer; it’s just animated sequences of last looks and silent goodbyes. And music. Beautiful music.


At home, I still have some of my old toys, salvaged and preserved in the drawing room showcase. It’s a bit of the younger me. An old monkey my dad brought from Germany. A toy dog which still barks if batteries are inserted. The movie made me think: how did I feel when I outgrew my toys? Sigh, I don’t remember.



If you haven’t watched Toy Story 3, you are missing the best movie of this year... at least, so far.

Thursday, June 24, 2010

sql update: insert space before caps and capitalize first character

I was loading up EAI Lookup values for use in an integration module I had. The external values were strings without spaces, with the first letter of each word capitalized.


Like this:

ReopenedOnClientRequest
Its equivalent Siebel value, to be shown, was:


Reopened On Client Request.


So I figured all I needed was an SQL statement to insert a space before each capitalized letter and make the first letter caps…


select initcap(regexp_replace(EXT_VALUE,'([^^])([A-Z])','\1 \2')) from S_EAI_LOOKUPMAP where LOOKUP_TYPE = 'ClaimStatus'


This did the trick!
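As a quick sanity check outside the database, the same transformation can be mimicked in JavaScript. This is just my own sketch of the regex logic (the function name is mine, not from Siebel or Oracle); since these values are already word-capitalized, no initcap step is needed here:

```javascript
// Insert a space before each capital letter that follows another character,
// mirroring the regexp_replace pattern in the SQL above.
function spaceOutCaps(s) {
  return s.replace(/(.)([A-Z])/g, '$1 $2');
}

console.log(spaceOutCaps('ReopenedOnClientRequest'));
// → Reopened On Client Request
```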

4th Generation CRM system

I found this page on the net… it attempts to define a 4th generation CRM system.

Today's Customer Relationship Management (CRM) System

It's impossible to state precisely what customer relationship management (CRM) means to everyone. The term has been applied to almost every element of business that even remotely interacts with a customer.

In their infancy, CRM systems were a series of mainframe or server-based applications specific to sales, marketing and support business functions.

The applications were lightweight by today's standards and did little more than capture and file critical data. But as cultural boundaries within organizations weakened, individual collections of information gave way to sophisticated applications that could span business functions. By doing so, these applications created the vision of a single view of the customer.

For the first time, organizations could track and analyze shifting customer needs, link marketing campaigns to sales results, and monitor sales activities for improved forecasting accuracy and manufacturing demand.

CRM's Evolution
CRM has evolved since its earliest incarnation, originally driven by an inside-out focus, through three phases of evolution: technology, integration and process. Only recently have we seen a major leap forward to a fourth phase: customer-driven CRM—an outside-in approach that has intriguing financial promise.

1. Technology: In its earliest incarnation, CRM meant applying automation to existing sales, marketing, support and channel processes as organizations attempted to improve communications, planning, opportunity and campaign management, forecasting, problem solving, and to share best practices. To some degree, it worked. However, automating poorly performing activities or processes rarely improves the quality of the outcome. So, for the most part, the quality of the return on investment (ROI) was meager—if measurable at all. The promise of the technology was there, but few organizations were realizing the pinnacle of performance. The metric of success was increased efficiency in sales, marketing, support and channel processes.

2. Integration: By developing cross-functional integration, supported by data warehousing and shared roles and responsibilities, organizations began to create a customized view of the customer. Support issues, Web hits, sales calls and marketing inquiries started building a deeper understanding of each customer and allowed aggressive organizations to adapt their tactics to fit individual needs. Integration focused around two primary components:

    • Make it easier to do business with the seller: Expected benefits are to improve customer retention and lower support costs.
    • Predictive modeling: Data mining of an aggregate of corporate knowledge and the customer contact experience was used to improve operational and sales performance. By applying complex algorithms to a history of purchasing or inquiry characteristics, it became practical to predict the demands of individual customers. Up-selling, cross-selling, even the ability to preempt potential problems, was now possible for all customer-facing representatives. Expected benefits are to have better cross-selling/up-selling and improved product offerings or delivery.

3. Process: By rethinking the quality and effectiveness of customer-related processes, many organizations began to eliminate unnecessary activities, improve outdated processes, and redesign activities that had failed to deliver the desired outcomes. Then, by re-creating the process through an understanding of the capabilities of the technology, the outcomes were more predictable and the promises for a meaningful ROI more substantial and realistic. The metric of success became the improved effectiveness in serving the customer.

Thus far, almost everything about CRM has focused on improving the effectiveness and efficiency of the seller's organization.

Organizations have evolved from sales representatives working from paper notebooks, or a card system, to a tightly integrated network that sees movement in sales activity, predicts product demand on manufacturing, and manages the logistics of complex teams to serve the buyer and seller.

Marketing, support services, channel management, revenue management, resource allocation/management, forecasting, manufacturing, logistics and even research and development have all seen the benefits of a well-designed CRM strategy.

However, the past decade of CRM and its associated improvements have been based on three assumptions:

1. The past would be a logical foundation to predict future customer needs and profitability.

2. Demand for traditional value propositions would remain constant.

3. Better customer relationships would deter attrition.

All three of these assumptions have failed—or at least become unstable—in a post-September 11 environment.

Today we know that:

1. Historical purchases or inquiries are not a clear indication of future needs as buyers are rapidly redefining requirements to satisfy their current business, market or shareholder demands.

2. Value propositions are changing in highly competitive markets as sellers are working aggressively to reestablish structural bonds.

3. Driven by sensitive financial markets, buyers move to whichever supplier can provide the best aligned, most cost effective solution that promises to stabilize, or improve, their business performance.

These factors are driving CRM into a fourth phase.

Customer-Driven CRM—The Fourth Phase
Today, revenue performance has become the central theme for CRM as organizations seek to achieve and maintain expected financial results. Leading executives are asking:

  • Which of my customers have the potential for a high-profit, sustainable relationship?
  • What defines profitable and unprofitable customer segments?
  • What must change to realize that optimal potential?
  • Where's my opportunity for growth?
  • Where's my risk for loss?
  • Am I making the right decisions related to balancing acquisition, cross-selling and upselling—and for the right customer groups?

The epiphany isn't in the questions themselves, but in the fact that we're asking them after a decade of CRM investments—investments intended to provide just those very answers.

It is important to understand that a disruptive change has occurred causing large segments of customer organizations to reassess many of their basic needs, values and assumptions.

Research indicates that this event was triggered by the uncertain complexities of the post-September 11th world. Organizations are now challenging everything from how they create value, to how they serve their markets, to how they meet shareholder expectations. It is the answers to these questions that create the framework for phase four CRM.

Without a deep understanding of what's going on in the customer's head—specifically what will influence buying behavior—it is difficult to establish customer strategies that mutually serve the needs and expectations of the buyer and seller communities.

Understanding the Difference

In the past, CRM has followed a basic balanced scorecard technique involving four categories: customer, financial, operations, and people.

From an inside-out perspective, organizations first analyzed the needs and capabilities of operations and their people to determine what could be delivered to the customer.

From that, they drew conclusions and predictions to determine the impact on the financial category.

As this has changed, so have the priorities. Now the focus is first on the customers:

  • What will they buy, when, why and for how much?
  • What creates value for them, and does this create a structural bond?
  • What services can we perform that merit premium margins?
  • Can we establish a new market segmentation strategy focused on potential profitability and willingness to purchase?
  • Do we understand their business drivers, financial metrics, buying process and decision criteria?

Customer driven CRM means that organizations first understand the customer, then move inward to operations.

Within the context of the customer, the systems and infrastructure capabilities needed to serve those customers and segmentation-based requirements must be reassessed.

Next, it's imperative to explore the skills and competency requirements for the people component of the CRM design.

A decade of CRM has taught us that nothing happens until your people interact with the customer in a manner consistent with new CRM customer strategies and systems.

And, finally, you should be well positioned to apply predictive modeling algorithms to establish a financial model with exceptional accuracy.

Not an easy task, but case studies are proving financial predictions that can demonstrate account-level forecasting with over 80 percent accuracy.


Developing a CRM strategy isn't an easy task. Complex organizational design, comprehensive technologies and ever-changing customer demands are just the beginning. The lessons learned are monumental but we know that the promises of customer driven CRM are worth the journey.

Here's a simple framework for fourth-generation CRM:

  • Focus on financial results: Learn how to identify existing profitable customer segments and determine what will establish a profit-based profile for moving forward. Then develop the business requirements to support sustained, and structurally bonded, relationships.

  • Find cost effective alternatives for nonbuyers or low-margin customers: Not all customer relationships are profitable and very few companies can afford to pay to deliver an equal level of services. Control costs and save your best resources for premium accounts—while working to bring low performers into an acceptable profit portfolio.


