1 Sep

Build and Deploy OSB Projects using Jenkins

I have been working with Jenkins for a while now. Primarily, my focus is on automated deployment of Oracle Fusion applications using Jenkins. I started with ADF deployments, which include the following steps:

  1. Check out the code from Subversion (SVN)
  2. Build the EAR file
  3. Run SonarQube rules for Code Analysis
  4. Get approval from development lead using Build Promotion Plugin
  5. Application deployment
  6. Send status email to initiator

To extend this workflow, I have set up OSB deployments in a similar fashion. There are only two differences in this process:

  1. Create a JAR file instead of an EAR
  2. Apply the customization file after every deployment

Below is the list of tools/technologies and Jenkins plugins used in this process.

  1. Jenkins
    1. Parameterized Build plugin
    2. Build promotion plugin
    3. Build Pipeline plugin
  2. SVN – for version control
  3. configjar – tool bundled with OSB to create configuration JAR files
  4. WLST – scripts to import and deploy the JAR files, followed by applying the customization file (a minimal sketch follows this list)
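
Here is a minimal WLST (Jython) sketch of what such an import script looks like, based on the MBean pattern from Oracle's sample scripts. The connection details, JAR name, and customization file name are placeholders, not my actual values:

    # import_osb.py - minimal sketch of an OSB import via WLST
    from java.io import FileInputStream
    from com.bea.wli.config.customization import Customization
    from com.bea.wli.sb.management.configuration import SessionManagementMBean
    from com.bea.wli.sb.management.configuration import ALSBConfigurationMBean

    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    domainRuntime()

    # Every OSB change happens inside a session, just like in the console
    sessionName = 'JenkinsImportSession'
    sessionMBean = findService(SessionManagementMBean.NAME, SessionManagementMBean.TYPE)
    sessionMBean.createSession(sessionName)

    # Configuration MBean scoped to the session we just created
    configMBean = findService(ALSBConfigurationMBean.NAME + '.' + sessionName,
                              ALSBConfigurationMBean.TYPE)

    # Upload the JAR produced by configjar and import it with the default plan
    jarFile = open('sbconfig.jar', 'rb')
    configMBean.uploadJarFile(jarFile.read())
    jarFile.close()
    importPlan = configMBean.getImportJarInfo().getDefaultImportPlan()
    configMBean.importUploaded(importPlan)

    # Apply the per-environment customization file after the import
    inStream = FileInputStream('dev_customization.xml')
    configMBean.customize(Customization.fromXML(inStream))
    inStream.close()

    # Activate the session to make the changes live
    sessionMBean.activateSession(sessionName, 'Imported by Jenkins')
    disconnect()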

Here is a step-by-step guide to setting up Jenkins for OSB projects:

Step 1: Since I have many OSB interfaces, I have set up one job per environment instead of one job per interface. That helps me manage Jenkins jobs effectively. Something like this:

  1. OSBBuild_Development – For Dev environment
  2. OSBBuild_Integration – For Integration / Test environment, and so on.

Step 2: Each job will prompt for two parameters – the OSB project name and the environment where it needs to be deployed.


Step 3: These parameters will be passed on to the next job, which is to get approval from the leads. The leads will receive an email with a promotion link.

Step 4: Once promoted/approved, the deployment team will receive an email. That email will also contain a link which can be used for deployment.

Step 5: The deployment job is divided into the following parts:

  1. Check out the code from SVN
  2. Build and create the JAR with the same name as provided in Step 2
  3. Invoke the WLST script and deploy the JAR to the environment selected in Step 2 (see the sketch after this list).
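
As a rough illustration of how these parts can be wired together, below is a hypothetical wrapper the deployment job could execute. The parameter names (OSB_PROJECT, TARGET_ENV), tool paths, and file layout are assumptions for the sketch, not my exact setup:

    # build_and_deploy.py - hypothetical wrapper invoked by the Jenkins job
    import os
    import subprocess

    # Jenkins exposes the two build parameters from Step 2 as env variables
    project = os.environ['OSB_PROJECT']    # assumed parameter name
    target_env = os.environ['TARGET_ENV']  # assumed parameter name
    osb_home = os.environ.get('OSB_HOME', '/u01/oracle/mw/osb')

    # 1. Build the configuration JAR with the OSB-bundled configjar tool;
    #    settings.xml is assumed to point at the checked-out project folder
    subprocess.check_call([
        os.path.join(osb_home, 'tools/configjar/configjar.sh'),
        '-settingsfile', 'settings.xml',
    ])

    # 2. Import the JAR and apply the environment's customization file via
    #    the WLST script shown earlier (assuming it is extended to read the
    #    JAR and customization file names from its arguments)
    subprocess.check_call([
        'wlst.sh', 'import_osb.py',
        project + '.jar',       # JAR named after the project from Step 2
        target_env + '.xml',    # per-environment customization file
    ])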


This concludes the high-level details of automating OSB deployments using Jenkins, along with the integrated approval process. If you have any queries, please contact me.

18 Oct

Getting Started with Big Data Fundamentals

All of us have heard these two words by now: Big Data. What is Big Data? Is it a database? Is it storage? There are many people who may not understand the true meaning of Big Data or who might be interpreting it in the wrong context. This blog post (and upcoming posts on the same topic) is an effort to explain Big Data in simple words. So, let’s begin.


What is Big Data?

Does Big Data mean data which is big in size? Well, that’s how the name sounds. The term “Big Data” doesn’t actually refer to the size of the data sets, but to the solutions used to extract value from the data sets – solutions involving new architectures and technologies.

It does not matter whether the data is big or small; these new methods are applicable to every data set. Even if you have a small data set, you can use these solutions to manage the data in a better way and extract useful information out of it.

Now the next question is – when should we start using these solutions to get the best out of them?

Well, before I explain the actual factors to be considered, I must say that you should NOT consider “size” as the primary factor for opting for big data solutions. The three considerations of Big Data, also called the three Vs, are:

  1. Volume
  2. Variety, and
  3. Velocity

Volume describes the size, that is, the amount of data generated.

Variety refers to the actual contents of the data set. There could be multiple sources for the data sets, and all these sources might be using different formats. That brings variety to the data sets.

Velocity is the frequency at which data is generated, captured, and made available to users or other systems for consumption. This is the key factor nowadays and plays an important role in opting for Big Data solutions. It has also led to the evolution of new frameworks, like Apache Spark and Amazon Kinesis.


10 Oct

Using AWS Lambda Function to Create AMI at Runtime

AWS Lambda is a very powerful service. You can write numerous Lambda functions to cater to your requirements. This article explains a real-world scenario which uses a Lambda function to achieve the intended result.

The Infrastructure in place:

Before I dig deep into the Lambda function, let’s understand the existing AWS infrastructure and the requirements. I had a very simple environment with Auto Scaling groups configured to take care of web servers running on Windows. Our SQL Server was running on RDS, which we recently migrated to EC2 (a different story altogether). Our code was hosted on Bitbucket and automatic deployments were configured using AWS CodeDeploy.

The Requirements: 

Our basic requirement was to create a new AMI every time a successful deployment completed. We were also looking to update our Auto Scaling group with the new AMI.

The Solution: 

Here begins the fun part. Let’s start with Bitbucket and move with the flow. Once the code is ready to be deployed, go to Bitbucket and click the Deploy To AWS button. More on configuring Bitbucket with AWS CodeDeploy can be found here.

That will pass control to AWS CodeDeploy, which will deploy to all the instances currently running as part of the Auto Scaling group. Now is the time to think about creating an AMI and updating the Auto Scaling group. To achieve this, I have used the Amazon SNS service. After every successful deployment, CodeDeploy will send a message to SNS, which in turn triggers an AWS Lambda function, and this function takes care of all the requirements we had.

Below is the detailed explanation of this function.
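
Before the detailed walkthrough, here is a minimal boto3 sketch of what such a handler could look like. This is an illustration, not the exact function from this setup; the ASG_NAME environment variable, the single-instance selection, and the copied launch-configuration fields are all assumptions:

    # create_ami_handler.py - hypothetical sketch of the SNS-triggered function
    import datetime
    import os

    import boto3

    ec2 = boto3.client('ec2')
    autoscaling = boto3.client('autoscaling')

    ASG_NAME = os.environ.get('ASG_NAME', 'web-asg')  # assumed configuration

    def lambda_handler(event, context):
        # The SNS notification from CodeDeploy is only used as the trigger here
        # 1. Pick one instance from the Auto Scaling group to image
        group = autoscaling.describe_auto_scaling_groups(
            AutoScalingGroupNames=[ASG_NAME])['AutoScalingGroups'][0]
        instance_id = group['Instances'][0]['InstanceId']

        # 2. Create an AMI from that instance (AMI creation is asynchronous)
        stamp = datetime.datetime.utcnow().strftime('%Y%m%d-%H%M%S')
        ami = ec2.create_image(InstanceId=instance_id,
                               Name='web-' + stamp, NoReboot=True)

        # 3. Clone the current launch configuration, pointing at the new AMI
        lc_name = group['LaunchConfigurationName']
        old = autoscaling.describe_launch_configurations(
            LaunchConfigurationNames=[lc_name])['LaunchConfigurations'][0]
        new_lc = 'web-lc-' + stamp
        autoscaling.create_launch_configuration(
            LaunchConfigurationName=new_lc,
            ImageId=ami['ImageId'],
            InstanceType=old['InstanceType'],
            SecurityGroups=old['SecurityGroups'],
            KeyName=old['KeyName'])

        # 4. Switch the Auto Scaling group to the new launch configuration
        autoscaling.update_auto_scaling_group(
            AutoScalingGroupName=ASG_NAME,
            LaunchConfigurationName=new_lc)
        return {'ami': ami['ImageId'], 'launch_config': new_lc}

Once the AMI becomes available, any instance the group launches will use the freshly deployed image.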