Here’s wishing you a Merry Christmas.
It's amazing what you can find on the net! The other day I was searching for a piece of Siebel code for a tricky requirement, and I found the answer in the most unlikely of places: Google Code. Somebody had developed an application and shared it there for free.
Try searching on open source project sharing sites for some cool goodies:
Please do check the licensing agreements before using them in your production code though.
For example, Siebel Power Tools is a really neat utility developed in AutoHotkey to simplify development.
The “Inbound E-mail Database Operations” vanilla business service is a favorite with Siebel developers; it is used when Siebel’s rules regarding business objects get in the way of your actual business requirement. It can be used to modify records in any Business Component under a business object different from your workflow’s BO. But I recently found an issue in its behavior when bounded picklists are involved. Usually, when one tries to set a value on a picklist field and the picklist is configured as bounded, Siebel throws a validation error saying the value cannot be found in the bounded picklist. In the case of the Inbound E-mail BS, the error is not thrown, i.e., no exception is raised. Try this out on your Siebel installation.
The business component Action has a field “Type”, which has a predefault value and a bounded picklist.
The picklist is bounded
Now I use the Business Service Simulator view to invoke the Inbound E-Mail BS’s InsertRecord method to insert an activity record.
I have set an incorrect value for all three picklist fields; the values are simply not present in the vanilla LOV system. When the simulation is run, we expect Siebel to throw a picklist validation exception. Instead, we get a success message and an Activity is created.
Instead of taking the wrong value we provided, Siebel has taken the predefault value directly. If the input value is a valid one, Siebel creates the record correctly.
We had an automation workflow which received inputs from Inbound XML to create an activity, and when the values in the incoming XML were wrong, the activities were still getting created without validation errors. The solution we implemented was to add a validation step in the workflow to ensure the records had correct, validated values.
If you are using this BS in your project, do check whether this error can occur in your business flow.
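One way to implement such a validation step is to look up each incoming value in the List Of Values business component before handing the data to the Inbound E-mail BS. Below is only a sketch in Siebel eScript: the function name, the LOV type name “TODO_TYPE”, and the calling context are all my assumptions, not part of the original workflow.

```javascript
// Sketch: returns true if sValue is an active value of LOV type sLovType.
// Runs inside Siebel eScript (server script); not standalone code.
function IsValidLovValue(sLovType, sValue)
{
    var boLov = TheApplication().GetBusObject("List Of Values");
    var bcLov = boLov.GetBusComp("List Of Values");
    var bFound = false;
    try
    {
        with (bcLov)
        {
            ClearToQuery();
            SetViewMode(AllView);
            SetSearchSpec("Type", sLovType);   // e.g. "TODO_TYPE" for Action Type
            SetSearchSpec("Value", sValue);
            SetSearchSpec("Active", "Y");
            ExecuteQuery(ForwardOnly);
            bFound = FirstRecord();
        }
    }
    finally
    {
        bcLov = null;
        boLov = null;
    }
    return (bFound);
}
```

The workflow’s validation step can call this for each bounded-picklist field and raise an error before InsertRecord ever runs, restoring the validation that the Inbound E-mail BS skips.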
One issue I have faced numerous times with Siebel’s browser scripts is that the ‘this’ reference is not recognized when used inside a separate function.
This code works fine when written directly in the PreInvoke section of the applet:
function Applet_PreInvokeMethod(name, inputPropSet)
{
if (name == "CustomMethod")
{
alert(this.Name());
return ("CancelOperation");
}
return ("ContinueOperation");
}
But if you decide to move the code into a separate function of its own:
function Demo()
{
alert(this.Name());
}
function Applet_PreInvokeMethod(name, inputPropSet)
{
if (name == "CustomMethod")
{
Demo();
return ("CancelOperation");
}
return ("ContinueOperation");
}
..the system will start giving errors saying the method is not defined. This really gets in the way when there is a lot of field access and manipulation required in the function. One way out is to pass the this reference directly as an argument into the function.
function Demo(applet)
{
alert(applet.Name());
}
function Applet_PreInvokeMethod(name, inputPropSet)
{
if (name == "CustomMethod")
{
Demo(this);
return ("CancelOperation");
}
return ("ContinueOperation");
}
Another way I have seen recently is to use a global variable for the applet and use that instead of this. The variable has to be initialized in the Applet_Load event:
(declarations)
var g_oApplet;
function Applet_Load()
{
g_oApplet = this;
}
function Demo()
{
alert(g_oApplet.Name());
}
function Applet_PreInvokeMethod(name, inputPropSet)
{
if (name == "CustomMethod")
{
Demo();
return ("CancelOperation");
}
return ("ContinueOperation");
}
Currently I am working on a Siebel Financial Applications project using a lot of Business Rules Processors (BRPs). The BRP way of working with Business Components is to use the various methods available under the FINS CAP Buscomp Handler Business Service.
The FINS CAP Buscomp Handler Business Service provides five methods for record operations.
The BS works without a Business Object context, i.e., while specifying the Business Component on which to operate, the Business Object name is not provided. It is the only BS I know of in Siebel that operates directly on business components without taking the BO context. But as we realized, this is not always the best way of operating. As the amount of data grew, we found the BRPs getting slower and performance degrading.
On spooling out the SQL while the BRPs were running, we found null queries being run against the tables, without any search criteria. When the InsertRecord method of the BS is used to insert a record into, say, the Opportunity BC, which is based on the S_OPTY table, the BS was running this query first:
SELECT
T1.CONFLICT_ID,
T1.LAST_UPD,
T1.CREATED,
T1.LAST_UPD_BY,
T1.CREATED_BY,
T1.MODIFICATION_NUM,
T1.ROW_ID,
T14.USAGE,
T8.TRDIN_EXPIRE_DAYS,
T7.NAME,
T1.PR_DEPT_OU_ID,
T7.INTEGRATION_ID,
T7.LOC,
T7.OU_NUM,
T10.NAME,
T7.CURR_PRI_LST_ID,
T7.PR_BL_ADDR_ID,
T7.PR_BL_PER_ID,
T7.PR_SHIP_ADDR_ID,
T7.PR_SHIP_PER_ID,
T1.CONSUMER_OPTY_FLG,
T13.BL_ACCNT_ID,
T13.BL_CON_ID,
T1.CHANNEL_TYPE_CD,
T1.CURCY_CD,
T1.CUST_ACCNT_ID,
T14.PROJ_STAT_CD,
T1.CLOSED_FLG,
T13.GROUP_TYPE_CD,
T13.DEPARTURE_DT,
T13.ARRIVAL_DT,
T4.STATUS_INBND_CD,
T7.ROW_ID,
T1.PR_CON_ID,
T1.NAME,
T1.NEW_LOAN_FLG,
T13.OPTY_MARKET_CD,
T12.STAGE_STATUS_CD,
T13.OPTY_SEGMENT_CD,
T4.STATUS_CD,
T1.APPL_OWNER_TYPE_CD,
T1.PAR_OPTY_ID,
T5.NAME,
T9.PAR_POSTN_ID,
T5.PROJ_PRPTY_ID,
T1.ALIAS_NAME,
T1.PR_OU_INDUST_ID,
T1.PR_OU_ADDR_ID,
T1.PR_REP_DNRM_FLG,
T1.PR_REP_MANL_FLG,
T1.PR_REP_SYS_FLG,
T1.PR_CMPT_OU_ID,
T6.COUNTRY,
T9.PR_EMP_ID,
T1.PR_OPTYORG_ID,
T1.PR_OPTYPRD_ID,
T1.BU_ID,
T1.PR_PRTNR_ID,
T1.PR_POSTN_ID,
T1.SUM_REVN_AMT,
T1.SUM_CLASS_CD,
T1.SUM_EFFECTIVE_DT,
T1.SUM_COMMIT_FLG,
T1.SUM_COST_AMT,
T1.SUM_DOWNSIDE_AMT,
T1.SUM_REVN_ITEM_ID,
T1.SUM_MARGIN_AMT,
T1.SUM_TYPE_CD,
T1.SUM_UPSIDE_AMT,
T1.SUM_WIN_PROB,
T11.LOGIN,
T1.PR_SRC_ID,
T6.STATE,
T1.PR_TERR_ID,
T1.PROG_NAME,
T1.PROJ_PRPTY_ID,
T13.REL_TYPE_CD,
T1.SALES_METHOD_ID,
T12.NAME,
T1.STG_START_DT,
T1.CURR_STG_ID,
T12.STG_ORDER,
T1.SECURE_FLG,
T1.OPTY_CD,
T1.PGROUP_PUBLIC_FLG,
T1.BU_ID,
T2.FCST_CLS_DT,
T2.FCST_REVN_CURCY_CD,
T16.LOGIN,
T17.EFFECTIVE_DT,
T17.COST_AMT,
T17.DOWNSIDE_AMT,
T17.MARGIN_AMT,
T17.WIN_PROB,
T17.REVN_AMT,
T17.ACCNT_ID,
T17.CLASS_CD,
T17.REVN_AMT_CURCY_CD,
T17.QTY,
T17.CRDT_POSTN_ID,
T17.TYPE_CD,
T17.UPSIDE_AMT,
T19.FST_NAME,
T19.LAST_NAME,
T20.SRC_CD,
T13.ROW_ID,
T13.PAR_ROW_ID,
T13.MODIFICATION_NUM,
T13.CREATED_BY,
T13.LAST_UPD_BY,
T13.CREATED,
T13.LAST_UPD,
T13.CONFLICT_ID,
T13.PAR_ROW_ID,
T14.ROW_ID,
T14.PAR_ROW_ID,
T14.MODIFICATION_NUM,
T14.CREATED_BY,
T14.LAST_UPD_BY,
T14.CREATED,
T14.LAST_UPD,
T14.CONFLICT_ID,
T14.PAR_ROW_ID,
T2.ROW_ID,
T3.ROW_ID,
T17.ROW_ID,
T18.ROW_ID,
T20.ROW_ID
FROM
SIEBEL.S_OPTY T1
INNER JOIN SIEBEL.S_OPTY_POSTN T2 ON T1.PR_POSTN_ID = T2.POSITION_ID AND T1.ROW_ID = T2.OPTY_ID
INNER JOIN SIEBEL.S_PARTY T3 ON T2.POSITION_ID = T3.ROW_ID
LEFT OUTER JOIN SIEBEL.S_SYS_KEYMAP T4 ON T1.ROW_ID = T4.SIEBEL_SYS_KEY
LEFT OUTER JOIN SIEBEL.S_OPTY T5 ON T1.PAR_OPTY_ID = T5.ROW_ID
LEFT OUTER JOIN SIEBEL.S_ADDR_PER T6 ON T1.PR_OU_ADDR_ID = T6.ROW_ID
LEFT OUTER JOIN SIEBEL.S_ORG_EXT T7 ON T1.PR_DEPT_OU_ID = T7.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_ORG_EXT_ATX T8 ON T1.BU_ID = T8.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_POSTN T9 ON T1.PR_POSTN_ID = T9.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_PRI_LST T10 ON T7.CURR_PRI_LST_ID = T10.ROW_ID
LEFT OUTER JOIN SIEBEL.S_USER T11 ON T9.PR_EMP_ID = T11.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_STG T12 ON T1.CURR_STG_ID = T12.ROW_ID
LEFT OUTER JOIN SIEBEL.S_OPTY_TNTX T13 ON T1.ROW_ID = T13.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_OPTY_DSGN_REG T14 ON T1.ROW_ID = T14.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_POSTN T15 ON T2.POSITION_ID = T15.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_USER T16 ON T15.PR_EMP_ID = T16.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_REVN T17 ON T1.SUM_REVN_ITEM_ID = T17.ROW_ID
LEFT OUTER JOIN SIEBEL.S_PARTY T18 ON T1.PR_CON_ID = T18.ROW_ID
LEFT OUTER JOIN SIEBEL.S_CONTACT T19 ON T1.PR_CON_ID = T19.PAR_ROW_ID
LEFT OUTER JOIN SIEBEL.S_SRC T20 ON T1.PR_SRC_ID = T20.ROW_ID
ORDER BY
T17.EFFECTIVE_DT DESC
If you analyze the last part of the SQL, there is no “WHERE” clause with a search specification, nor are there any bind variables. This query will simply return all records present in the Opportunity Business Component. That’s right, it’s an empty query on the S_OPTY table and all the other tables left-joined to it. And this is fired every time the InsertRecord method is invoked. I think Siebel tries to check that the record about to be inserted is not a duplicate of an existing record in the system, and it does this by first firing an empty query and then comparing the result with what we are trying to insert. But as the number of records in the tables grows, the performance degrades. And this is a vanilla/OOTB business service.
Anyway, we had to do away with “FINS CAP Buscomp Handler:InsertRecord” and replaced it with another vanilla BS: the “Inbound E-mail Database Operations” Business Service and its “InsertRecord” method. The syntax is not exactly the same, and some modifications are required. But after switching to this BS, we found a tremendous improvement in the speed of the system.
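For illustration, here is roughly what the replacement call looks like in eScript. This is a sketch only: the business component name, the field values, and especially the input property layout (a “BusComp” property plus flat field/value pairs, as in the simulator example earlier in the post) are assumptions that you should verify against your own Siebel version.

```javascript
// Sketch: invoking Inbound E-mail Database Operations : InsertRecord
// in place of FINS CAP Buscomp Handler : InsertRecord.
// Property names below are assumptions; check them in your repository.
var svc   = TheApplication().GetService("Inbound E-mail Database Operations");
var psIn  = TheApplication().NewPropertySet();
var psOut = TheApplication().NewPropertySet();

psIn.SetProperty("BusComp", "Opportunity");     // target BC, no BO context
psIn.SetProperty("Name", "New Opportunity");    // field/value pairs to insert
psIn.SetProperty("Sales Stage", "01 - Prospecting");

svc.InvokeMethod("InsertRecord", psIn, psOut);
```

Remember the caveat from the first post above: this BS bypasses bounded-picklist validation, so validate the inputs yourself.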
Lesson: Don’t worry about the schema. This fits with a piece I read the other day about how MongoDB has high adoption for small projects because it lets you just start storing things without worrying about what the schema or indexes need to be. Reddit’s approach lets them easily add more data to existing objects, without the pain of schema updates or database pivots.
[Reddit] used to spend a lot of time worrying about the database, keeping everything nice and normalized. You shouldn’t have to worry about the database. Schema updates are very slow when you get bigger. Adding a column to 10 million rows takes locks and doesn’t work. They used replication for backup and for scaling. Schema updates and maintaining replication is a pain. They would have to restart replication and could go a day without backups. Deployments are a pain because you have to orchestrate how new software and new database upgrades happen together.
Instead, they keep a Thing Table and a Data Table. Everything in Reddit is a Thing: users, links, comments, subreddits, awards, etc. Things keep common attributes like up/down votes, a type, and creation date. The Data table has three columns: thing id, key, value. There’s a row for every attribute: a row for title, url, author, spam votes, etc. When they add new features, they don’t have to worry about the database anymore. They don’t have to add new tables for new things or worry about upgrades. Easier for development, deployment, maintenance.
The price is you can’t use cool relational features. There are no joins in the database, and you must manually enforce consistency. No joins means it’s really easy to distribute data to different machines. You don’t have to worry about foreign keys, doing joins, or how to split the data up. It worked out really well. Worries about using a relational database are a thing of the past.
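The Thing/Data split is a classic entity-attribute-value layout. A minimal sketch in JavaScript (all names here are mine, not Reddit’s) shows why new attributes never require a schema change:

```javascript
// Minimal entity-attribute-value store mirroring Reddit's Thing/Data split.
// "things" holds the common columns; "data" holds one row per attribute.
const things = []; // { id, type, ups, downs, created }
const data = [];   // { thingId, key, value }

function createThing(type) {
  const id = things.length + 1;
  things.push({ id, type, ups: 0, downs: 0, created: Date.now() });
  return id;
}

function setAttr(thingId, key, value) {
  // No ALTER TABLE needed: a new attribute is just another row.
  data.push({ thingId, key, value });
}

function getAttrs(thingId) {
  const attrs = {};
  for (const row of data) {
    if (row.thingId === thingId) attrs[row.key] = row.value;
  }
  return attrs;
}

// Usage: a "link" Thing with attributes that were never declared up front.
const linkId = createThing("link");
setAttr(linkId, "title", "Interesting article");
setAttr(linkId, "url", "http://example.com");
```

The cost shows up exactly as the quote says: reassembling one object means scanning attribute rows, and nothing in the store enforces consistency between them.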
val1
val2
val3
The searchspec you need to build is: “val1 OR val2 OR val3”. Well, that’s all that Orit does. Frankly, I don’t know who developed this utility, but it was clearly written by a Siebel developer. All it does is concatenate column-wise data with ‘OR’ in between.
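Recreating that behaviour takes only a few lines. Here is a sketch in JavaScript (the function name is mine, and real searchspecs may also need quoting around each value, which this skips) that turns a pasted column of values into an OR-joined string:

```javascript
// Joins one-value-per-line input into an OR-separated searchspec string,
// mimicking what the Orit utility does with a pasted spreadsheet column.
function buildSearchSpec(columnText) {
  return columnText
    .split(/\r?\n/)              // one value per line
    .map(v => v.trim())
    .filter(v => v.length > 0)   // skip blank lines
    .join(" OR ");
}

// buildSearchSpec("val1\nval2\nval3") yields "val1 OR val2 OR val3"
```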
Quick question: will the following code snippet work?
Business Service: BS1 contains only this code:
function function1 ()
{
TheApplication().RaiseErrorText("function1 triggered");
}
There is no code in any other event/function of this BS. And now, the attempt is to trigger this BS via the following code:
var bs = TheApplication().GetService("BS1");
bs.function1();
Now there is something wrong with the second code snippet, right? This is not the usual way to invoke a Business Service method. The practice is to use the InvokeMethod command, passing property sets for input and output. But here is the output of running these in Siebel 8:
This is an example of the Script Libraries feature, available from Siebel 8 onwards. Developers can write multiple functions in business services, these functions get exposed, and they can be invoked directly as you would in C/C++/Java. There is no need to add code in the Service_PreInvokeMethod event to expose the functions.
There are limitations though: such a business service’s functions can be invoked only via scripting. They cannot be used in WFs or BRPs. But if your functionality calls for lots of scripting, this feature surely comes in handy.
The ever friendly Oli has been posting some really tricky pieces of code for his code challenges. Head over there to learn about the scripting mistakes that creep into code.
Happy Scripting !
Dynamic toggle applets are probably the first piece of automation a Siebel developer gets to work on: switch the applet depending on some field value. No scripting, nothing at the BusComp level, just applet toggles. But issues crop up when the logged-in user tries to create a new record on the applet. Sometimes the view jumps or refreshes, especially when there are lots of applets stacked in the view. The user has to manually scroll back down to the target applet.
Oracle has a workaround for this ‘defect’ documented here [ID 541100.1], which involves loads of scripting at the applet and BusComp level. But I tried to come up with something with fewer lines of code.
Resulting solution: add the following code in the WebApplet_PreInvokeMethod section of the base applet as well as its toggle applets:
if (MethodName == "NewRecord")
{
this.BusComp().NewRecord(NewBefore);
return (CancelOperation);
}
Yep, I know, the code does not seem to make sense at all. But for some reason, it works! The view does not jump, and the new record gets created right there in the applet. And at just 3 lines of code, it beats Oracle’s long and elaborate version.
with (secondbc) { ClearToQuery(); ExecuteQuery(); }
secondbc.ClearToQuery(); secondbc.ExecuteQuery();
Happy New Year, everyone. Yeah, I know, this post is long overdue. I changed jobs some time last year, and the work at the new place is not exactly what I expected. Crazy deadlines, unrealistic requirements, last-minute changes…the works.
But I did get to learn more about this whole CRM world.. And here’s hoping I find more time to share more of what I learn.
I began my career on Siebel 5 years ago, and it has been my bread and butter. The tried and tested on-premise mode of CRM installation has always been popular with the blue-chip and Fortune 500 clients I have had the opportunity to work for. Although cloud-based applications are gaining a foothold, most of my employer’s clients steered away from sharing mission-critical data on the web. They seem to feel more comfortable maintaining and storing their customers’ data in their own server rooms. A lot of them have simply said no to Salesforce CRM because they don’t get to secure their customers’ data themselves. But all that is changing.
Salesforce.com has understood this customer concern, and they have decided to do something about it. This year, they will introduce a new feature called Data Residency Option, or DRO. Simply put, DRO will enable on-premise storage of mission-critical data on Cloud.com servers, which can be set up inside client office locations.
DRO will be a part of database.com, the cloud database Salesforce has made generally available. It gives Salesforce customers the option to store their mission-critical data at their own location, thus keeping complete control of the inward and outward flow of data across the customer firewall.
The technique developed by Navajo, also called Virtual Private SaaS (VPS), provides the cloud vendor, Salesforce.com in this case, a key that enables it to translate the encrypted data as it passes through its cloud application. The data is then re-encrypted as it leaves the cloud vendor’s solution and returns to the customer’s data source. The corporate data is unreadable on the cloud provider’s servers during this entire operation. VPS is available both as a cloud service and as an appliance sitting on the customer’s local or wide area network. With VPS, the customer is solely responsible for its data security, as it holds all the encryption keys.
The flip side of using such a technique is the security of the encryption and decryption keys themselves. It is critical to manage the keys properly: once a key is lost, the encrypted data can no longer be accessed. Hence, this calls for robust key management to avoid any such eventuality.
Barring the above, in my view this technique will overcome the most important impediment to cloud adoption, as it addresses the key customer fear: potential data threats in the cloud.
Coming to the acquisition: Navajo Systems, founded in 2009, was one of the existing encryption service providers for Salesforce. Salesforce’s decision to acquire Navajo hence made a lot of sense, at a time when other cloud-based CRM tools such as SugarCRM already had deployment options on public clouds (Amazon EC2, Rackspace, etc.), on private clouds such as VMware, and on-site behind customer firewalls.
According to a recent report from IBISWorld, one of the world’s largest independent publishers of U.S. industry research, the CRM industry today stands at 60% on-premise deployments and 40% cloud-based solutions (1). For customers looking at new purchases or upgrades of their legacy applications, DRO might just be the key decision influencer. Let’s wait and watch!
References: