
Posts Tagged ‘ITPro’

Getting started with PowerShell and Azure August 21st, 2014

Vinod Kumar

I am a big-time supporter of PowerShell and automation when it comes to working with Azure. In the recent past, I have seen SQL developers ask me how they can start playing with PowerShell for Azure. Though the concepts of PowerShell are awesome and I use them with SQL Server, it is often difficult for them to get their machines ready. The question usually is: where do I start? This blog will get you started with installing and configuring your machine for PowerShell and Azure.

We can install the Azure PowerShell modules by running the Microsoft Web Platform Installer. I have initiated the installer and gone through the initial wizard.

We can see the installation progress, and once it is done, we are presented with information about the various components that get installed as part of this.
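
Once the installer finishes, a quick sanity check from a PowerShell prompt confirms the module is available (a minimal sketch; the module name Azure is what the 2014-era installer registers):

# Confirm the Azure PowerShell module is installed on this machine
Get-Module -ListAvailable -Name Azure

# Load it into the current session if it is not auto-loaded
Import-Module Azure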

Now, to use PowerShell with our Azure subscription, we need to link our account. If you are using an organizational account, use the Add-AzureAccount command from the Windows Azure PowerShell prompt. This is useful ONLY when we are using Azure AD for our account. If you are not using Azure AD, move to the next step.
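
For that organizational-account path, a minimal sketch looks like this; Add-AzureAccount pops up an interactive Azure AD sign-in:

# Sign in with an organizational (Azure AD) account
Add-AzureAccount

# Verify which accounts are now linked to the session
Get-AzureAccount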

In most cases, we are going to have a standard Live account (Microsoft account) associated with our subscription. In this case, we need to download the publish settings file containing the management certificate. To do this, invoke the Get-AzurePublishSettingsFile command from the PowerShell command prompt.

We will be prompted with a save dialog; save this publish settings file in a secure location. The next step is to import it so that we can start using PowerShell from our local machine to work with our subscription remotely.

For this we will invoke: Import-AzurePublishSettingsFile "Path of publish settings file"
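
Putting the certificate-based steps together, a short sketch looks like this (the file path is only an assumed example; keep the downloaded file somewhere secure and delete it once imported):

# Opens the browser and downloads the .publishsettings file for the subscription
Get-AzurePublishSettingsFile

# Import the downloaded file so this machine can manage the subscription
Import-AzurePublishSettingsFile "C:\Secure\MySubscription.publishsettings"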

Once this is done, we are connected, and we will get a confirmation of the current subscription being used.

We can quickly get details about our current subscription using the Get-AzureSubscription command, which lists details such as the subscription name and ID.
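
For example (the subscription name below is a placeholder), we can list the linked subscriptions and pick the one subsequent commands should run against:

# List all subscriptions now linked to this machine
Get-AzureSubscription

# If more than one is linked, select the one to work with
Select-AzureSubscription -SubscriptionName "My Subscription"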

Now we are all set to get started with PowerShell and Azure. In future posts we will use this configuration as the starting point to play around with creating new VMs, stopping VMs, deleting VMs, creating other services and much more. Stay tuned!


Moving data to Cloud–Considerations and thoughts August 20th, 2014

Vinod Kumar

Being a data person and meeting customers almost every single day about considerations for moving to Azure (the cloud), I take the opportunity to demystify a lot of basic concepts with customers. After the session, it is about thinking in the right direction while taking data to the cloud. This blog reflects lessons learned, best practices, approaches and understanding from real-world experiences. These are not written-in-stone recommendations, but they are approaches I highly recommend customers keep in mind while taking data to the cloud.

Know your workload

The basic conversation here is to identify if the application is a “Green Field” project or an “Existing Application”. In a “Green Field” project, we will be exposing data via a service for the first time. Like a literal green field, we are dealing with an undeveloped environment and don’t have any expectations, limitations, dependencies or other constraints that would result from having an existing service.

For existing applications, services may reside on our on-premises servers, in a partner-hosted environment or in third-party cloud environments. In such scenarios, there may be a need to move, scale, or provide a redundant environment for these services. When such applications come up, I ask whether the end user is open to exposing just a subset of the existing data as a new service on the cloud.

When working on a move to the cloud, I generally ask if the customer wants to migrate their current deployment as-is, or treat migrating data / services as an opportunity to enhance it and provide a superset of the functionality found in the original application.

If you already have an existing application, but it is reaching the limits of scale and is difficult or impossible to scale further due to technology or financial constraints, one option is to use the cloud as a "scale layer". In this scenario, cloud infrastructure can be put in front of your existing data services to provide significant scale. This is yet another workload to look at.

Exposing your data

There can be different levels of motivation when exposing your data via APIs. I generally quiz customers around this: is it a compliance requirement, is it self-motivated so that others can integrate, or is it a requirement from consumers? Based on this, the discussion will decide if it is a need-to-have, want-to-have or nice-to-have.

Irrespective of how the data gets exposed by applications, there is some more analysis we do; these are some typical questions we ask:

  1. Data Location: Are there constraints on where data can be stored? Is geography a constraint?
  2. Data Type and Format: Are there any standards which mandate that data be transmitted in XML, JSON, binary, text, OData, DOM, Zip, image formats, etc.?
  3. Sharing: Is there a specific API to be implemented, like WebAPI, WCF, Web Services, FTP, SAPI, etc.?
  4. Consumption: Are there requirements to retrieve the data in its entirety, or to query the data and retrieve only a subset? Is queryability required, and do we need to certify it?
  5. Commercial Use: It is critical to understand whether internal systems consume the data or whether it is available for commercial use. Is there a need to monetize access to the data and services delivered?
  6. Data Discoverability: Is there a need to expose the data and advertise it somehow? Should it integrate with gateways, government services or third-party data services like Azure DataMarket?

Know your Data

There are multiple technologies which can be used to host data, each with their own pros, cons, and pricing. Data size is a key factor when determining which technology should be used to host the data. In addition to understanding the current data size, it's important to also understand the data growth rate. Evaluating the growth rate lets you extrapolate storage needs for future years and helps scope the technology used.
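
As a quick illustration of that extrapolation (the numbers below are made up for the sketch), a simple compound-growth estimate in PowerShell is usually enough at this stage:

# Rough storage projection: size after N years at a steady annual growth rate
$currentSizeGB = 500      # assumed current data size in GB
$annualGrowth  = 0.30     # assumed 30% growth per year
$years         = 3
$projectedGB   = $currentSizeGB * [math]::Pow(1 + $annualGrowth, $years)
"Projected size in $years years: {0:N0} GB" -f $projectedGB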

Know how often the data gets updated. If the customer wants to expose data to the public, then the source data and the data exposed via services may be updated in sync or out of sync.

Putting a copy of your data into the cloud requires the actual transfer of data from your location to the datacenter of the cloud hosting provider. These transfers take time and typically have associated bandwidth charges that should be considered, both for the initial transfer of the current state of your data set and for the periodic refreshes of data.

Sharding is something that most cloud storage mechanisms support. When looking to move large data sets to the cloud, consider sharding and think about the optimal ways to partition data for query performance. Examples of different types of sharding include sharding by region, state, time period (month, year, etc.), postal code, customer name, etc.

Query pattern is yet another dimension we need to keep in mind. Since you pay based on bandwidth utilization, it is critical to understand the query patterns and how queries access data. Because we are paying for data usage, it is important to understand the amount of data we transfer in every request, the number of columns queried, etc.

As we build our applications, it is important to understand if there are any export requirements for other systems to consume. In a hybrid scenario, we will need to run queries that export files to the local file system, and those files must then be transported and imported into a cloud store. Depending on the cloud technology being used for storage, the import may be trivial, or it may require custom code or scripts to export the data from the source and import it into the cloud.
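
As a rough sketch of such a script (assuming SQL Server as the source and Azure blob storage as the target; the table, server, container and storage account names are placeholders), bcp handles the export and the Azure storage cmdlets handle the upload:

# Export a table to a local flat file using bcp (character mode, trusted connection)
bcp MyDB.dbo.MyTable out C:\Exports\MyTable.dat -S MyServer -T -c

# Upload the exported file to a blob container
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<key>"
Set-AzureStorageBlobContent -File "C:\Exports\MyTable.dat" -Container "exports" -Context $ctx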

Are you in the data-selling business?

There have been very few customers who are in a B2C scenario, but those who are in this business need to understand that there are new costs to handle when serving data as a service. It is no longer a freebie. Even though cloud computing can greatly reduce costs, there are still costs involved in hosting, loading and supporting a data service.

As these costs add up, companies look at a minimum of cost recovery rather than profit in the first run. They look at optimizing later to make a profit through volume rather than profit out of single transactions. I have seen customers adopt different pricing strategies for Data as a Service or Software as a Service offerings. There is no one right way:

  1. Tiered pricing – This is like mobile plans; you charge standard transaction rates per month, time period, data volume, etc.
  2. Pay-as-you-go pricing – this is borrowed from how cloud vendors themselves charge.
  3. Peak and non-peak pricing – we can have different pricing for different times of the day.
  4. One-time usage subscription – something like trial pricing.

Customers adopt many such mechanisms in their strategy; these are just representative methods one might use.

Conclusion

I sincerely hope I have planted some thoughts in your mind as you migrate your workload to the cloud or enable your data on the cloud. The cloud is no different from what you do on-premises; we just need to be doubly careful and take a few extra steps as we enable our customers. In future posts, there will be more thoughts that I will try to bring out.


Basic Column Encryption of data with SQL Server August 6th, 2014

Vinod Kumar

The more I talk with customers about basic architecture, the more questions come up about how to implement it. These scenarios look great on the whiteboard when discussed, yet what customers come back with after implementing (say, after 6 months) is completely different. These are some of the challenges when working with customer developer teams who have just started their career writing DB-level code.

One such scenario I discuss with customers who have security requirements is column-level encryption. It is one of the simplest implementations and yet difficult to visualize. The scenario is straightforward: most hospital management systems come with a basic requirement that one person's data must be masked from another.

So in this post, I thought it would be worth giving a step-by-step tutorial on what I am talking about. These capabilities have been in SQL Server since the 2005 version and can be used by anyone. I am not talking about infrastructure best practices or deployment strategy yet; that will be for a future post.

Creating Users for Demo

-- Create the logins for the demo
CREATE LOGIN doctor1 WITH PASSWORD = 'MyComplexPass@w0rd'
GO
CREATE LOGIN doctor2 WITH PASSWORD = 'MyComplexPass@w0rd'
GO
CREATE DATABASE hospitaldb
GO
USE hospitaldb
GO
CREATE USER doctor1
GO
CREATE USER doctor2
GO

For our example we have two doctors, and we want to build a mechanism where Doctor1's patient details are not visible to Doctor2. Let us create a simple table to hold the values.

-- Create tables
CREATE TABLE patientdata
(
id         INT,
name       NVARCHAR(30),
doctorname VARCHAR(25),
uid        VARBINARY(1000),
symptom    VARBINARY(4000)
)
go
-- Grant access to the table to both doctors
GRANT SELECT, INSERT ON patientdata TO doctor1;
GRANT SELECT, INSERT ON patientdata TO doctor2;

Basic Encryption steps

The next step is to create our keys. To start with, we create the database master key. Then we create the certificates we will use. In this example, I am using symmetric keys which are encrypted by the certificates.

-- Create the database master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'HospitalDBpwd@123'

-- Create one certificate per doctor, owned by that doctor
CREATE CERTIFICATE doctor1cert AUTHORIZATION doctor1 WITH SUBJECT =
'Doctor1cert', START_DATE = '07/07/2014'
GO
CREATE CERTIFICATE doctor2cert AUTHORIZATION doctor2 WITH SUBJECT =
'Doctor2cert', START_DATE = '07/07/2014'
GO

-- Create one symmetric key per doctor, protected by that doctor's certificate
CREATE SYMMETRIC KEY doctor1key AUTHORIZATION doctor1 WITH ALGORITHM =
TRIPLE_DES ENCRYPTION BY CERTIFICATE doctor1cert
GO
CREATE SYMMETRIC KEY doctor2key AUTHORIZATION doctor2 WITH ALGORITHM =
TRIPLE_DES ENCRYPTION BY CERTIFICATE doctor2cert
GO

Let us next look at the keys we just created using the sys.symmetric_keys catalog view.

SELECT *
FROM   sys.symmetric_keys 

A typical output looks like:

Adding Data into table

Next is to simulate the two different users and enter some data into our table. Let us first impersonate Doctor1 and enter values; then we will do the same for Doctor2.

EXECUTE AS LOGIN = 'Doctor1'

OPEN SYMMETRIC KEY doctor1key DECRYPTION BY CERTIFICATE doctor1cert

-- View the list of open keys in the session
SELECT *
FROM   sys.openkeys

Insert into our table.

INSERT INTO patientdata
VALUES      (1,
'Jack',
'Doctor1',
Encryptbykey(Key_guid('Doctor1Key'), '1111111111'),
Encryptbykey(Key_guid('Doctor1Key'), 'Cut'))

INSERT INTO patientdata
VALUES      (2,
'Jill',
'Doctor1',
Encryptbykey(Key_guid('Doctor1Key'), '2222222222'),
Encryptbykey(Key_guid('Doctor1Key'), 'Bruise'))

INSERT INTO patientdata
VALUES      (3,
'Jim',
'Doctor1',
Encryptbykey(Key_guid('Doctor1Key'), '3333333333'),
Encryptbykey(Key_guid('Doctor1Key'), 'Head ache'))

In this example Doctor1 has 3 records. Next is to revert and repeat the same set of operations as Doctor2.

-- Close all opened keys
CLOSE ALL SYMMETRIC KEYS

REVERT 

Impersonate as Doctor2 and do the same steps.

EXECUTE AS LOGIN = 'Doctor2'

OPEN SYMMETRIC KEY doctor2key DECRYPTION BY CERTIFICATE doctor2cert

-- View the list of open keys in the session
SELECT *
FROM   sys.openkeys

INSERT INTO patientdata
VALUES      (4,
'Rick',
'Doctor2',
Encryptbykey(Key_guid('Doctor2Key'), '4444444444'),
Encryptbykey(Key_guid('Doctor2Key'), 'Cough'))

INSERT INTO patientdata
VALUES      (5,
'Joe',
'Doctor2',
Encryptbykey(Key_guid('Doctor2Key'), '5555555555'),
Encryptbykey(Key_guid('Doctor2Key'), 'Asthma'))

INSERT INTO patientdata
VALUES      (6,
'Pro',
'Doctor2',
Encryptbykey(Key_guid('Doctor2Key'), '6666666666'),
Encryptbykey(Key_guid('Doctor2Key'), 'Cold'))

CLOSE ALL symmetric keys

-- View the list of open keys in the session
SELECT *
FROM   sys.openkeys

REVERT

Check on the data

Let us do a simple SELECT on the table to check how the values are stored.

-- Select data and see the encrypted values
SELECT *
FROM   patientdata 

As you can see, the encrypted columns are not readable as-is; they show up as binary garbage.

Impersonate as each Doctor and show values

The next step is to show that Doctor1 can see only his data and Doctor2 can see only his. The steps are simple:

EXECUTE AS LOGIN = 'Doctor1'

OPEN SYMMETRIC KEY doctor1key DECRYPTION BY CERTIFICATE doctor1cert

SELECT id,
name,
doctorname,
CONVERT(VARCHAR, Decryptbykey(uid))      AS UID,
CONVERT (VARCHAR, Decryptbykey(symptom)) AS Symptom
FROM   patientdata

CLOSE ALL SYMMETRIC keys

REVERT 

The output would be like:

Now let us impersonate as Doctor2 and check for values.

EXECUTE AS LOGIN = 'Doctor2'

OPEN SYMMETRIC KEY doctor2key DECRYPTION BY CERTIFICATE doctor2cert

SELECT id,
name,
doctorname,
CONVERT(VARCHAR, Decryptbykey(uid))      AS UID,
CONVERT (VARCHAR, Decryptbykey(symptom)) AS Symptom
FROM   patientdata

CLOSE ALL SYMMETRIC keys

REVERT 

The output for this stage would be:

Conclusion

As you can see, this is a very simple implementation of column-level encryption inside SQL Server, and it can be quite effective for masking tenants' data from each other in a multi-tenant environment. There are a number of reasons one might go for this solution. I thought this was worth a shout even though the capability has been in the industry for close to a decade now.


Managed Databases on Cloud July 18th, 2014

Vinod Kumar

Recently my good friend and colleague Govind wrote about this topic: what customers look forward to when it comes to the cloud and working with Azure. The fundamental tenets that customers look at for the cloud, be it PaaS, SaaS or IaaS, have been around:

  1. Reduced Maintenance headaches
  2. SLA backed for HA/DR
  3. Performance
  4. Synchronization with on-prem
  5. Security
  6. Backups
  7. No worry about hardware

and a few more. But for the most part, the above covers the questions we get into. In a recent conversation, I had to outline some of the options for backup requirements with a customer, which I thought was worth sharing here. I am looking at this from an Azure standpoint:

For IaaS:

  1. You will need to use SQL Server Agent and build maintenance plans that can be automated. This can be scripted (PowerShell, T-SQL or others) and done for all workloads.
  2. From SQL Server 2012 SP1 CU2 onwards, we can use the Backup to URL option, wherein backups from a SQL Server box on an Azure VM can be pointed at blob storage. I wrote about this a while back and you can try the same: http://blogs.extremeexperts.com/2014/04/14/sql-server-2014-backup-to-azure-blob/ (a sketch of the command follows this list).
  3. SQL Server 2014 also supports encrypted backups to blob storage, and this article shows the same: http://blogs.extremeexperts.com/2014/04/08/sql-server-2014-encrypted-backups/
  4. Also, from SQL Server 2014 we have the option to configure Managed Backup (automated backups). This takes backups to blob storage automatically, on a predefined schedule or based on workload patterns. Documentation for this can be found at: http://msdn.microsoft.com/en-us/library/dn449496.aspx
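
For item 2 above, a minimal sketch of what the Backup to URL call looks like when driven from PowerShell (assuming the SQL Server PowerShell module is loaded for Invoke-Sqlcmd; the instance, database, storage account and credential names are placeholders, and the credential holding the storage account key must already exist on the instance):

# Run a Backup to URL from the SQL Server instance on the Azure VM
Invoke-Sqlcmd -ServerInstance "MyAzureVM" -Query "
BACKUP DATABASE MyDB
TO URL = 'https://mystorageacct.blob.core.windows.net/backups/MyDB.bak'
WITH CREDENTIAL = 'MyAzureStorageCredential', COMPRESSION, STATS = 10"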

For PaaS:

  1. Since consistency is already taken care of in the Azure world, we don't have to worry about this.
  2. For the Basic, Standard and Premium service tiers there are SLAs for point-in-time recovery of 7, 14 and 35 days respectively. You can read more about this at: http://msdn.microsoft.com/en-us/library/azure/jj650016.aspx. I highly recommend using PowerShell scripts to automate this if you plan to use it.
  3. In the past, I have also seen customers use the Database Copy functionality to keep a copy of their database in a ready-to-use state every couple of days. This gives them an opportunity to go back to that version immediately without any problems, and it is a viable option if you would like to use it: http://blogs.msdn.com/b/sqlazure/archive/2010/08/25/10054109.aspx. Since point-in-time restores are available, I am more inclined to use those for cold standby and restores. Having said that, we can still use the copy feature to create dev/test environments from our production servers (a sketch of the copy command follows this list).
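
For item 3 above, a hedged sketch of that database copy, again driven through Invoke-Sqlcmd (the logical server, database and login names are placeholders; the statement must run against the master database of the logical server):

# Create a copy of an Azure SQL Database on the same logical server
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "master" `
  -Username "myadmin" -Password "<password>" `
  -Query "CREATE DATABASE MyDB_Copy AS COPY OF MyDB"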

These are my customer notes, and I plan to start publishing such notes from time to time here on my blog. Since we are talking about Azure, additional capabilities will appear and SLAs can change over time, so please keep an eye on the documentation for the latest values.


Excel Tip: Month Name Sorting in Excel July 14th, 2014

Vinod Kumar

In my previous article, Excel Tip: Month Name Sorting with PowerPivot / PowerView, I wrote about sorting month names inside PowerPivot. I got a number of pings about the same functionality inside standard Excel sheets. I thought this would be easy and known to many, but to my surprise not everyone knows about this Excel capability. So in this post, let me take you through the same process for Excel tables.

Let me prep you with the data first. I have two columns, Month Name and Sales. When I try to sort by Month Name, you can see how the sorting happens for A-Z and for Z-A: it is alphabetical, nowhere near what one would expect for months.

That brings us to an interesting option which has been there all along: select "Sort by Color" –> "Custom Sort…".

This brings up a small dialog which I am sure most of you have used. Right there in the Order dropdown is a hidden gem called "Custom List…". Select this to see the magic.

This brings up a standard set of lists, or feel free to define your own new list for sorting.

Click "OK" and see. This is awesome because the list in your table is now sorted by month automatically. Now sort A-Z and Z-A to see the difference. So how easy and cool is that? Have you ever used this option before? Let me know.
