Tuesday, October 21, 2014

Bare bones Credit Card processing workflow



Some of my recent work deals with accepting credit cards and other forms of card payments, and settling them.

I had some challenges understanding even the basic terminology, so I thought it would be good to share my summary with others, in case they need it.

So, here it is...

There are quite a few key players:


  1. Card Holder - the person who has the card, say, a VISA card.
  2. Issuing Bank - the bank which issued the card (the VISA-branded card) to the card holder. Say, Bank of America.
  3. Merchant - let's say I want to buy something at www.amazon.com.
  4. Acquiring Bank - Amazon.com may have an account with some bank like Wells Fargo, and as and when Amazon gets the money paid by customers, it is deposited into its account with Wells Fargo.
  5. Payment Processor - someone like Chase Paymentech which accepts online payment requests from Amazon.com.
  6. Card Network - VISA/MasterCard/Amex have their own electronic networks to accept payment requests, and they route each request to the Issuing Bank for authorization.
  7. Payment Service Provider - online players like PayPal and Amazon Payments who certify themselves as merchants with the card networks and act on behalf of small merchants (sub-merchants).
  8. Sub-Merchant - a small merchant who may not have the muscle/resources to go through the lengthy and costly certification process with card networks and banks.
  9. Payment Gateway - CCBill or other online services which deal with processors and banks.

There are 2 basic workflows:

  1. Authorize - when the card needs to be authorized for the amount of the sale requested
  2. Settlement - when the amount of the sale transaction is deducted from the customer's account and deposited into the merchant's account

Please note that these are simplified descriptions for ease of understanding.
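
As a purely illustrative sketch of those two workflows from a merchant's point of view (every name here is made up; real processors each have their own APIs):

import java.math.BigDecimal;

// Hypothetical interface, for illustration only.
interface PaymentProcessor {
    // Authorize: the issuing bank places a hold for the sale amount
    // and returns an authorization code.
    String authorize(String cardNumber, BigDecimal amount);

    // Settlement: the authorized amount is captured and eventually
    // deposited with the merchant's acquiring bank.
    void settle(String authorizationCode);
}

public class CheckoutFlow {
    public static void checkout(PaymentProcessor processor) {
        // Step 1: authorize at the time of sale
        String authCode = processor.authorize("4111111111111111", new BigDecimal("42.50"));

        // Step 2: settle later (often in a nightly batch), once the goods ship
        processor.settle(authCode);
    }
}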

The overall workflow can be visualized as depicted here and here.


Monday, September 09, 2013

Browser vs Layout Engine



Ever since I started working on a front-end-heavy project which involves a significant amount of HTML5 and CSS3 as well as elements of responsive design, I have heard terms like WebKit, Gecko, Browser Engine, JS Engine, Rendering Engine and so on.

Some of these terms were clear to me but some were not, so I decided to do some digging into this area to educate myself.

The findings are good enough to share with a wider audience in the hope of being helpful to someone in the future.
  • What is WebKit?  
WebKit is an open source rendering engine which parses HTML, CSS and JavaScript and renders the web page in a browser.

The standard components of WebKit are WebCore (parsing and layout), JSCore (used for JS parsing and execution), as well as a platform-specific stack for actually rendering the page.

WebKit Diagram

It is all very well explained in an article here.

  • What are the other rendering engines in the wild?
Well, there are numerous rendering engines besides WebKit, but the most notable ones are Trident (from Microsoft) and Gecko (from Mozilla).

Conceptually, they are pretty similar to WebKit, but the actual differentiation lies in the implementation.

  • What is a Browser?
A browser is software which is used to access resources over the internet (or an intranet).

Browsers use a rendering engine like WebKit/Gecko to render the page, but have additional code for the browser UI as well as for dealing with different persistence layers like Cookies/LocalStorage/WebSQL/IndexedDB.


(Image source: How Browsers Work)

The standard components of a browser are:

  • Parsing (HTML, XML, CSS, JavaScript)
  • Layout (common in all WebKit browsers)
  • Text and graphics rendering
  • Image decoding
  • GPU interaction
  • Network access
  • Hardware acceleration

More information can be found here and here

  • Does using WebKit mean that browsers will be compatible?
No. Just because two browsers use WebKit does not mean that they will be compatible.
For example, Chrome (up to version 27) and Safari are based on WebKit, but as we can see from the diagrams above, there are lots of other components which vary from one browser to another (for example, Chrome uses the V8 JS engine whereas Safari uses a different one) or from one OS to another (like Safari on Windows vs Safari on iPad).

It is a complex world out there!

Thursday, May 30, 2013

How Does Solr Sort Documents When Their Scores Are the Same

We have had cases where the same keyword search gives us results that appear to be ordered randomly.

On digging deeper, it seems that if the scores of the documents are the same, then Solr sorts them based on their internal DocId, i.e., the order in which the documents were indexed.

To demonstrate this, I used a very simple schema with three fields (id, rank_id and name) and added a few documents like this:

<doc>
<str name="id">201</str>
<int name="rank_id">20111</int>
<str name="name">Perl Harbor</str>
</doc>

<doc>
<str name="id">2022</str>
<int name="rank_id">20111</int>
<str name="name">Perl Harbor</str>
</doc>

<doc>
<str name="id">1</str>
<int name="rank_id">20</int>
<str name="name">Perl Harbor</str>
</doc>

<doc>
<str name="id">2</str>
<int name="rank_id">21</int>
<str name="name">Perl Harbor</str>
</doc>


When doing a simple keyword query against the name field, I get the following documents, in the order in which they were added:

<doc>
<str name="id">201</str>
<int name="rank_id">20111</int>
<str name="name">Perl Harbor</str>
</doc>

<doc>
<str name="id">2022</str>
<int name="rank_id">20111</int>
<str name="name">Perl Harbor</str>
</doc>

<doc>
<str name="id">1</str>
<int name="rank_id">20</int>
<str name="name">Perl Harbor</str>
</doc>

<doc>
<str name="id">2</str>
<int name="rank_id">21</int>
<str name="name">Perl Harbor</str>
</doc>


This indicates that, when scores tie, the order is simply the order in which the documents were added to Solr.
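
If you need a deterministic order, the usual fix is to add an explicit tie-breaker sort after the score. Here is a minimal SolrJ sketch (the field names are from the schema above; the server URL is an assumption):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TieBreakerQuery {
    public static void main(String[] args) throws Exception {
        // assumes a local Solr 1.4.x instance on the default port
        CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery("name:\"Perl Harbor\"");
        // primary sort by relevance, then break ties on a stable field
        query.addSortField("score", SolrQuery.ORDER.desc);
        query.addSortField("rank_id", SolrQuery.ORDER.desc);

        QueryResponse response = server.query(query);
        System.out.println(response.getResults());
    }
}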

You can see the same question being asked here.

Saturday, October 27, 2012

Solr Schema Changes - Breaking Vs Non-Breaking



There are times when your Solr-based application needs to be extended: adding new fields, updating existing fields' definitions, or deleting existing fields.

Whenever we run into these scenarios, one of the most important questions that needs to be answered is: does this change require the existing index to be deleted and recreated, or is it as simple as updating the schema without any deletion and re-indexing?

If a change requires the index to be deleted and all docs to be re-indexed, then this is something I call a breaking change. One such case is when a field's omitNorms setting is toggled. This impacts all the documents, and unless all docs are deleted and then re-indexed, the index will still carry the older information.

Changes like adding a new field or deleting a field are easy to deal with. These changes do not require all docs to be deleted; Solr handles them nicely, and all newly added docs will follow the new schema. This is what I refer to as a non-breaking change.
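
For example, adding a new optional field to schema.xml is a non-breaking change; existing documents simply won't have a value for it (the field name and type below are just for illustration):

<!-- non-breaking: only newly indexed documents will carry this field -->
<field name="summary" type="text" indexed="true" stored="true" required="false"/>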

I hope this clarifies some of the questions people may have about the impact of making changes to a Solr schema.


Friday, May 25, 2012

Creativity, Brainstorming and Work Spaces

This article does a good job of clearing up misconceptions about brainstorming and creativity, and of showing how they are influenced by people dynamics as well as the work space people are in.

Following is what I understood:
  • Brainstorming is not as productive as it is made out to be. Most importantly, the notion that during a brainstorming session we should not be critical of ideas, and should rather just build an inventory of them (supposedly to ensure a free flow of ideas), does not seem to hold up in the real world.
  • Research shows that it is much better to let people question the ideas; this seems to lead to ideas that are far more original than what brainstorming produces.
  • The article suggests that unfamiliar perspectives can be thought-provoking and can lead to new ideas. The success of Broadway musicals whose artists were a mix of people who had worked together before and some newcomers indicates that this truly is the case.
  • It is a lesson worth keeping in mind: the more successful teams have a healthy mixture of people who have worked together before and some new people who bring a different perspective than the rest of the team.
  • For a team to succeed, it is important that the team is able to meet physically, and often. This leads to the insight that work spaces play an important role in fostering creative collaboration across different groups.
  • The success of MIT's Radar Lab in Building 20, as well as the Pixar offices set up by Steve Jobs, suggests that workspaces matter much more than we think.

As someone working in software development with different sets of people, these lessons matter to me and to people like me.

Wednesday, June 29, 2011

Using log4j to get ibatis and SQL logs
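
To see the SQL that iBATIS runs (plus bind parameters and results), point your application at a log4j configuration like the one below and uncomment the loggers you are interested in: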

# Global logging configuration
log4j.rootLogger=ERROR, stdout

#log4j.logger.com.ibatis=DEBUG

# shows SQL of prepared statements
#log4j.logger.java.sql.Connection=DEBUG

# shows parameters inserted into prepared statements
#log4j.logger.java.sql.PreparedStatement=DEBUG

# shows query results
#log4j.logger.java.sql.ResultSet=DEBUG

#log4j.logger.java.sql.Statement=DEBUG

# Console output
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] - %m%n

Tuesday, May 31, 2011

Anatomy of Spam Business

This paper details the way the spam business model works.

The paper presents the work in great detail, and it is astonishing to learn that spammers have an entire ecosystem in which they thrive. They set up their stores through affiliate programs and host them on bullet-proof hosting servers, managed by companies that do not abide by requests to take down the sites.

Sunday, May 29, 2011

Elastic Search - an interesting search solution

ElasticSearch is an open source RESTful search server, built on top of the Lucene library.
It boasts the following features:



  • JSON over HTTP (see the sketch after this list)

  • Schema-free

  • Near Realtime search

  • Easy Distributed Index and Search

  • Multi-Tenancy

  • Ready for the cloud - very easy to set up in the Amazon cloud

  • JAVA API Support

  • Support for Facets

  • It uses a write-behind queue to store index updates and makes use of transaction logs to keep track of them.

  • Reads can be done on Shard Replicas.

However, it does not support XML.
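
To give a feel for the JSON-over-HTTP interface, here is a minimal Java sketch that indexes one document (the index/type/id values and the local node on the default port 9200 are my own assumptions):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class IndexOneDoc {
    public static void main(String[] args) throws Exception {
        // PUT /index/type/id with a JSON body indexes a document
        URL url = new URL("http://localhost:9200/blog/post/1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        String json = "{\"title\":\"hello\",\"body\":\"first post\"}";
        OutputStream out = conn.getOutputStream();
        out.write(json.getBytes("UTF-8"));
        out.close();

        // 200/201 means the document was indexed
        System.out.println("HTTP " + conn.getResponseCode());
    }
}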


On the surface, it appears that this product is ready for the Web 2.0 world and is ideal for cloud deployment.


Its feature set is not as rich as Apache Solr's, but it does have decent support for facets, which are hot nowadays. It also has very good data visualization support, which makes it well suited for monitoring tools.


How does it compare with Solr?



  • Solr is richer in feature set, with respect to analyzers and facets.

  • Solr's distributed setup is not ideal and looks awkward; ElasticSearch's design seems more robust.

  • Solr has been around much longer and has a mature community behind it.

  • ElasticSearch is, so far, one committer's work.

  • ElasticSearch scores over Solr in terms of cloud readiness.

  • XML support is missing in ElasticSearch, which is not a big deal as JSON is the standard for the Web 2.0 world.

You can get more info from these slides.

When do you use ES?

  • A big index or realtime search is needed

  • or, there are many indexes

  • or, there is a multi-tenancy requirement (Solr cores are okay for this too)

When should you not use ES?

  • If the team is comfortable with Solr, then stick to it

  • Justifying ES in a large corporation would be difficult

More info can be obtained from here.






Logstash: A Free/Open Source alternative to Splunk

Today I came across a wonderful presentation on logstash, an open source log archiver and analyzer which makes use of ElasticSearch to index and search log data.

What makes it interesting is its very good support for collecting events from different sources such as log files, syslogs, sockets, and MQs. It lets you apply different filters to the events and stores its index in ElasticSearch.
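
A minimal configuration sketch shows the input -> output shape (the path and options here are my own assumptions, not from the presentation):

input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
  }
}
output {
  # index events into a local ElasticSearch node
  elasticsearch { }
}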

The use of ElasticSearch is interesting, as it uses JSON to index and read data and provides an easy way to search and visualize log data. ElasticSearch can scale better than Solr and is ready for the cloud.

This is a compelling package and offers a credible alternative to Splunk.

The logstash project URL is this.

Sunday, May 22, 2011

How to log httpclient using log4j and Java Util logging

There are times when HttpClient's trace needs to be logged.

We can use a log4j configuration file like this:

log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%c] %m%n

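# org.apache.http at DEBUG logs headers and context;
# org.apache.http.wire logs full payloads and is extremely verbose,
# so it is kept at ERROR here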
log4j.logger.org.apache.http=DEBUG
log4j.logger.org.apache.http.wire=ERROR


You can pass the path of this config file on the command line like this:
-Dlog4j.configuration=C:\myworkspaces\Client\src\log4j.properties

If you are using Java util logging, then you can use this:
-Djava.util.logging.config.file=C:\myworkspaces\Client\src\logging.properties

The Java util logging config file needs to look like this:

.level = FINEST

handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level = ALL

org.apache.http.level = FINEST
org.apache.http.wire.level = SEVERE

Monday, May 16, 2011

We are in the price rise age

The days of cheap stuff seem to be gone. It looks like the price of commodities, and everything else, is about to go up or is already inching up.

As per Jeremy Grantham, times are changing and we are in for a rude shock.

Here is a summary of Grantham's thoughtful newsletter (lifted from here):
  • Until about 1800, our species had no safety margin and lived, like other animals, up to the limit of the food supply, ebbing and flowing in population.
  • From about 1800 on the use of hydrocarbons allowed for an explosion in energy use, in food supply, and, through the creation of surpluses, a dramatic increase in wealth and scientific progress.
  • Since 1800, the population has surged from 800 million to 7 billion, on its way to an estimated 8 billion, at minimum.
  • The rise in population, the ten-fold increase in wealth in developed countries, and the current explosive growth in developing countries have eaten rapidly into our finite resources of hydrocarbons and metals, fertilizer, available land, and water.
  • Now, despite a massive increase in fertilizer use, the growth in crop yields per acre has declined from 3.5% in the 1960s to 1.2% today. There is little productive new land to bring on and, as people get richer, they eat more grain-intensive meat. Because the population continues to grow at over 1%, there is little safety margin.
  • The problems of compounding growth in the face of finite resources are not easily understood by optimistic, short-term-oriented, and relatively innumerate humans (especially the political variety).
  • The fact is that no compound growth is sustainable. If we maintain our desperate focus on growth, we will run out of everything and crash. We must substitute qualitative growth for quantitative growth.
  • But Mrs. Market is helping, and right now she is sending us the Mother of all price signals. The prices of all important commodities except oil declined for 100 years until 2002, by an average of 70%. From 2002 until now, this entire decline was erased by a bigger price surge than occurred during World War II.
  • Statistically, most commodities are now so far away from their former downward trend that it makes it very probable that the old trend has changed – that there is in fact a Paradigm Shift – perhaps the most important economic event since the Industrial Revolution.
  • Climate change is associated with weather instability, but the last year was exceptionally bad. Near term it will surely get less bad.
  • Excellent long-term investment opportunities in resources and resource efficiency are compromised by the high chance of an improvement in weather next year and by the possibility that China may stumble.
  • From now on, price pressure and shortages of resources will be a permanent feature of our lives. This will increasingly slow down the growth rate of the developed and developing world and put a severe burden on poor countries.
  • We all need to develop serious resource plans, particularly energy policies. There is little time to waste.

Saturday, May 07, 2011

Urbanization of India - the road ahead

The recent controversy about the Lavasa project shows that India has to evolve a lot to achieve the much-needed urban renewal of the country.

As India prospers and the agricultural sector languishes, farmers and workers are moving to cities to earn their livelihood. This is putting pressure on cities, and we need to build new ones. Lavasa could be a good model, but for our rotten system.

However, we have to keep working on our system so that new cities can be built.

You can read about Lavasa here.

Thursday, April 21, 2011

Using Like in ibatis

When we use the like operator, we normally need to do something like this:

select * from t where t.name like 'saroj%'

We pass % at the end, at the beginning, or at both ends.

If we had to do the same thing in iBATIS with a dynamic input, we could do the following:

select * from t where name like #name#||'%'

If we have a list of dynamic inputs, the like fragment needs to be wrapped in an <iterate> tag (here the name property is a List):

select * from t where
<iterate property="name" open="(" close=")" conjunction="OR">
  name like #name[]#||'%'
</iterate>

This would generate SQL like the following:

select * from t where (name like 'saroj%' or name like 'name1%' or name like 'name2%')

Thursday, April 07, 2011

How to develop critical thinking at early age

Critical thinking is a must for a fulfilling life.

The question is: how do we build it? Well, this article gives good tips for building it in early childhood.

Key ideas are

1) Reason with your kid.

2) Classify ideas and share them with your kid.

3) Build relationships between objects.

4) Ask why, what if, and why not?

5) Explore alternative ways.

Monday, January 31, 2011

Eclipse Galileo and Helios won't update behind proxy

If you are trying to update Eclipse Galileo (3.5) or Helios (3.6) behind a proxy, you might run into errors like "Site is not found".

The obvious course of action would be to check the proxy configuration. So you change the settings and ensure that they carry the right authorization details.

Even after these steps, Eclipse fails to update and the error message remains the same.

It sounds baffling, and it sure is.

After doing some googling, I found that from Eclipse 3.5 onwards the FileTransfer API has changed. In ECF 3.0/Eclipse 3.5 the primary provider is based upon Apache HttpClient 3.1. This was introduced in the ECF 3.0/Eclipse 3.5 cycle because the previous provider, based upon the JRE URLConnection implementation, proved insufficiently reliable (see bug 166179).

Unfortunately, the Apache HttpClient implementation, although more robust than the URLConnection-based provider, does not support NTLMv2 proxies directly (for an explanation of why, see here).

For NTLMv2 proxies that require a username and password, the workaround is to:

1. Disable the ECF HttpClient provider.
2. Provide the NTLMv2 proxy authentication info (proxy host, domain, username, and password).

In ECF 3.0/Galileo, both can be done via system properties provided to Eclipse on startup. The following settings can be put in eclipse.ini (one argument per line, after -vmargs) so that Eclipse does not use HttpClient; here is an example using 'myproxy', 'mydomain', 'myusername', and 'mypassword':

-Dorg.eclipse.ecf.provider.filetransfer.excludeContributors=org.eclipse.ecf.provider.filetransfer.httpclient
-Dhttp.proxyPort=8080
-Dhttp.proxyHost=myproxy
-Dhttp.proxyUser=mydomain\myusername
-Dhttp.proxyPassword=mypassword
-Dhttp.nonProxyHosts=localhost|127.0.0.1


Once you do this, you should be able to update Eclipse.

References:

  1. ECF Filetransfer Support for NTLMv2 Proxy
  2. Eclipse Bug about this problem

Tuesday, January 25, 2011

Setting up Apache Solr in Eclipse with Tomcat

Prerequisites for this exercise:
• Apache Solr 1.4.1 (or another release)
• Eclipse WTP
• Apache Tomcat 6.x

Steps are:

1. Extract the Solr 1.4.1 release into the C:\ drive. For my example, the directory is C:\apache-solr-1.4.1

2. Launch Eclipse and create a workspace, say solr-workspace

3. Import the apache-solr-1.4.1.war file into your workspace by clicking File -> Import.
Select Web -> WAR file as the import source.
WAR File: C:\apache-solr-1.4.1\dist\apache-solr-1.4.1.war
Web Project: apache-solr-1.4.1
Click on Finish.

4. Create a folder called solr in the project and create another folder called conf inside solr.

5. Copy the files from C:\apache-solr-1.4.1\client\ruby\solr-ruby\solr\conf into the /solr/conf folder.
You can define your schema in schema.xml.

6. Create a Tomcat server using the server wizard. The defaults work fine.

7. Add the project apache-solr-1.4.1 to your server.


8. Edit server.xml to let Tomcat know where the Solr config files are:

<Context docBase="apache-solr-1.4.1" path="/apache-solr-1.4.1"
         reloadable="true" source="org.eclipse.jst.j2ee.server:apache-solr-1.4.1">

  <Environment name="solr/home" type="java.lang.String"
               value="C:\myworkspaces\solr-workspace\apache-solr-1.4.1\solr" override="true"/>

</Context>

9. Start the server and you should see the following:

INFO: Using JNDI solr.home: C:\myworkspaces\solr-workspace\apache-solr-1.4.1\solr
INFO: Solr home set to 'C:\myworkspaces\solr-workspace\apache-solr-1.4.1\solr\'

Troubleshooting Tips

If your installation shows errors that the Solr home is not found, then do the following:
• ensure that server.xml has the right entry
• ensure that the solr directory exists, has conf as a child folder, and that all the files are in place
• clean the Tomcat server and the Tomcat work directory

Friday, January 07, 2011

Computer says no

A good read about how messy the computer systems at big banks are.

As a result banks tend to operate lots of different databases producing conflicting numbers. “The reality was you could never be certain that anything was correct,” says a former executive at Royal Bank of Scotland. Reported numbers for the bank’s exposure were regularly billions of dollars adrift of reality, he reports; finding the source of the error was hard.

Saturday, March 06, 2010

Declarative Programming Vs Imperative Programming

The principal distinction between declarative and imperative languages is that declarative languages allow the programmer to concentrate on the logic of an algorithm (declarative languages are goal-driven; control is not the concern of the programmer), while imperative languages require the programmer to focus on both the logic and the control of an algorithm.

Characteristics of imperative languages:

1. Model of computation based on step-by-step sequences of commands.
2. Program states exactly how the result is to be obtained.
3. Destructive assignment of variables.
4. Data structures changed by successive destructive assignments.
5. Order of execution is crucial, commands can only be understood in context of previous computation due to side effects.
6. Expressions/definitions cannot be used as values.
7. Control is the responsibility of the programmer.

Characteristics of declarative languages:

1. Model of computation based on a system where relationships are specified directly in terms of the constituents of the input data.
2. Made up of sets of definitions or equations describing relations which specify what is to be computed, not how it is to be computed.
3. Non-destructive assignment of variables.
4. Explicit representations for data structures used.
5. Order of execution does not matter (no side effects).
6. Expressions/definitions can be used as values.
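
To make the contrast concrete, here is a small illustrative Java sketch: the same computation written imperatively, with a mutable accumulator, and declaratively, as a map/reduce expression over the data:

import java.util.Arrays;
import java.util.List;

public class SumOfSquares {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5);

        // Imperative: spells out HOW - explicit iteration and
        // destructive assignment of the accumulator on each step.
        int sum = 0;
        for (int n : nums) {
            sum += n * n;
        }
        System.out.println(sum); // 55

        // Declarative: states WHAT - a mapping and a reduction;
        // iteration and control flow are left to the library.
        int sum2 = nums.stream().mapToInt(n -> n * n).sum();
        System.out.println(sum2); // 55
    }
}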

You can get more info here.

Tuesday, August 11, 2009

How to get MQ JMS Calls log

If you want to enable trace logging for WebSphere MQ Java and JMS calls, then you need to add the following properties when starting Tomcat/WAS/your Java program:


-Djava.library.path="C:\WAS_MQ_LIB" - the path to the directory where the MQ-related libraries are available

-DMQJMS_TRACE_LEVEL="base" - "base" does maximum logging

-DMQJMS_TRACE_DIR="c:\LOGS" - the path to the directory where the log should be written
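
For example, a standalone client could be launched like this (the class name is just a placeholder for your own application):

java -Djava.library.path="C:\WAS_MQ_LIB" -DMQJMS_TRACE_LEVEL="base" -DMQJMS_TRACE_DIR="C:\LOGS" com.example.MyMqClient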

Thursday, July 30, 2009

Ibatis error: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException

If iBATIS is throwing the following exception, then apart from the usual suspects (correct DTD declarations in the sql-map-config XML and sql-map XML files), one should also look at the insert SQL definition in the sql-map XML.

com.ibatis.common.xml.NodeletException: Error parsing XML. Cause: java.lang.RuntimeException: Error parsing XPath '/sqlMapConfig/sqlMap'. Cause: com.ibatis.common.xml.NodeletException: Error parsing XML. Cause: java.lang.RuntimeException: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException

I had something like this in my insert query xml definition:
INSERT INTO TEST_T (ACCT_ID) VALUES (#acctId)

The above SQL statement throws com.ibatis.common.xml.NodeletException, which is difficult to figure out.

In my case, error is due to not having closing hash (#) with acctId

If i change the insert statement to following then things work again, notice the closing # in #acctId#

INSERT INTO TEST_T (ACCT_ID) VALUES (#acctId#)

I hope this is of some use to others who run into this error.