Channel: Microsoft Azure Cloud Integration Engineering

Cloud services are not available in this subscription


Were you recently granted permission on an Azure subscription, and now, when you try to deploy a new cloud service from Visual Studio 2015, you get the error “Cloud Services are not available in this subscription” (Figure 1)?

Let’s fix it.

 

Cloud Services are not available in this subscription

Figure 1. Cloud Services are not available in this subscription

 

This error occurs because, for Azure Service Management (ASM) resources, you must be a co-administrator on the subscription in order to deploy services.

 

If you look at the new portal, you can see that you have permission to publish to this subscription, but keep in mind that this “Owner” role (Figure 2) is only valid for Azure Resource Manager (ARM) resources. Even though you can manage Cloud Services on the new portal, a cloud service is an ASM resource, so this role does not apply to it. Roles configured on the new portal are RBAC roles, and they only apply to ARM resources.

Here you can find more information on ASM vs ARM resources: https://azure.microsoft.com/en-us/documentation/articles/resource-manager-deployment-model/

Owner RBAC role

Figure 2. Owner RBAC role

 

So, you should ask a subscription Admin to add your user as a co-administrator on the old portal.

If you do not know who the subscription Admin is, you can find out on the new portal.

Click on Subscriptions > Select your subscription > All settings (Figure 3)

 

Searching for subscription admin

Figure 3. Searching for subscription admin

 

Then click Users > Subscription admins > Assigned to, and you should see the subscription admin's email address (Figure 4).

 

Searching for subscription admin

Figure 4. Searching for subscription admin

 

Send an email to the administrator with the steps below.

Log in to the old portal (https://manage.windowsazure.com), then click Settings > Administrators tab > Add (Figure 5)

 

Add co-admin to the subscription

Figure 5. Add co-admin to the subscription

 

Enter the co-admin email address (Figure 6)

 

Enter co-admin email address

Figure 6. Enter co-admin email address

 

After you are added as a co-administrator, go back to Visual Studio 2015, click Refresh, and you will be able to deploy your cloud service normally (Figure 7).

 

Refresh publishing Sign In

Figure 7. Refresh publishing Sign In

 

If you do not want to make this user a co-administrator, they will not be able to upload the package directly from Visual Studio 2015. However, you can grant them the RBAC Contributor role on the new portal, and they will then be able to build the package locally and upload it through the new portal or from the command line.

To enable Contributor Role:

On the new portal, go to Subscriptions > All settings > Users > Add > select “Contributor”, add the user's email address, and click OK.

The user will then be able to upload the package through the new portal successfully.
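If you prefer to grant the Contributor role from the command line rather than the portal, here is a minimal sketch using the AzureRM PowerShell module; the sign-in name and subscription ID are placeholders you would replace with your own values:

# Sign in with an account that can manage access on the subscription
Add-AzureRmAccount

# Grant the Contributor role at subscription scope (placeholder values)
New-AzureRmRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>"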

For command line publishing, you can go through this article:

https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-continuous-delivery/#4-publish-a-package-using-a-powershell-script

 


On deploying a cloud service, users sometimes receive alerts notifying that WAS has not started


Problem: In a cloud service, IIS fails to start and throws the error: “The World Wide Web Publishing Service service depends on the Windows Process Activation Service service which failed to start because of the following error: The service has not been started.”

Symptom: The w3wp process and the WAS service crash after the cloud service is deployed.

Resolution: This is a known issue. On Windows Server 2012 R2, an Internet Information Services (IIS)-based website may crash intermittently because a W3wp.exe process stops when the operating system or an application tries to access a performance counter value during the shutdown process. This can cause system error events and prevent a graceful shutdown of the worker process.

This problem occurs because the memory allocated to the performance counter has already been freed. One option is to install the hotfix directly on your cloud service instances; it is available here:

https://support.microsoft.com/en-in/kb/3048824

The recommended way, however, is to install it from a startup task, so the hotfix is applied every time you redeploy the cloud service or it scales out. The steps are given below.

  1. Create a new folder named StartUp in your web role project.
  2. Add the following lines to your service definition file (.csdef), inside your WebRole tag:
 
<Startup>
  <Task commandLine="StartUp\startup.cmd" executionContext="elevated" taskType="simple"/>
</Startup>
  3. Create a cmd file named startup.cmd in the StartUp folder.
  4. Right-click the files added to the StartUp folder, go to Properties, and set “Copy to Output Directory” to “Copy Always”.
  5. Download the hotfix from https://support.microsoft.com/en-in/kb/3048824, unzip it, and add the “Windows8.1-KB3048824-v2-x64.msu” file to the StartUp folder. (The file is only about 1 MB, so you can add it to your solution; if you prefer not to, place it on any shared location that the startup task can reach.)
  6. Add the commands below to your startup.cmd file:
         rem Log file for this startup task
         set startuptasklog=.\startuptasklog.txt

         rem Registry value used as a marker so the hotfix is only installed once
         set regkey=HKLM\SOFTWARE\Microsoft
         set regkeyvalue=1
         set regstring=SampleReg

         rem If the marker value already exists, skip the installation
         reg QUERY %regkey% /t REG_DWORD | find "%regstring%"
         if %ERRORLEVEL%==0 goto installed

         echo installing registry %regkey%\%regstring% >> %startuptasklog%
         rem Write the marker value, then install the hotfix silently
         REG ADD %regkey% /v %regstring% /t REG_DWORD /d %regkeyvalue%
         StartUp\Windows8.1-KB3048824-v2-x64.msu /quiet
         if %ERRORLEVEL%==0 (
             echo successfully added registry %regkey%\%regstring% >> %startuptasklog%
         )
         goto end

         :installed
         echo %regkey%\%regstring% is installed >> %startuptasklog%

         :end
         echo startup task finished >> %startuptasklog%
  7. Publish the cloud service.
  8. You can verify the installation by checking the “SampleReg” registry value under “HKLM\SOFTWARE\Microsoft” (see the verification sketch after this list).
  9. This task restarts your role after installing the hotfix on the machine, so it might take a few minutes for your instance to start.
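
If you want to verify the result from inside a role instance (for example over Remote Desktop), a small PowerShell check could look like this; the marker value name matches the one used by startup.cmd above:

# Check the marker value written by the startup task
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft" -Name "SampleReg"

# Confirm the hotfix itself is installed
Get-HotFix -Id "KB3048824"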

Following these steps should resolve the issue, provided it matches the problem described here.

For more details, please read https://support.microsoft.com/en-in/kb/3048824.


DISCLAIMER
Sample Code/Script is provided for the purpose of illustration only and is not intended to be used in a production environment.
THIS SAMPLE CODE/Script AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code,
provided that You agree:
(i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded;
(ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and
(iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code

Azure Management Certificates and Publish Settings files with CSP Subscriptions


In this post I will talk about a limitation of Azure CSP subscriptions that prevents users from working with Azure management certificates and publish settings files.

 

Azure management certificates and publish settings files (a publish settings file contains the management certificate) are intended and limited to managing Azure Service Management (ASM) resources, that is, resources from the previous Azure portal (https://manage.windowsazure.com). Please see the reference below:

 

What are management certificates?

Management certificates allow you to authenticate with the Service Management API provided by Azure classic. Many programs and tools (such as Visual Studio or the Azure SDK) will use these certificates to automate configuration and deployment of various Azure services. These are not really related to cloud services.

https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-certs-create

 

Note: If you don’t know how to create and upload an Azure management certificate or download a publish settings file, please visit Upload an Azure Management API Management Certificate and Get-AzurePublishSettingsFile.

 

So, what if I have a CSP account and want to use a management certificate or publish settings file? To answer this question, let’s first talk a little about Azure CSP subscriptions.

What is CSP?

Here is the CSP definition from the official website:

The Microsoft Cloud Solution Provider program enables partners to directly manage their entire Microsoft cloud customer lifecycle. Partners in this program utilize dedicated in-product tools to directly provision, manage, and support their customer subscriptions. Partners can easily package their own tools, products and services, and combine them into one monthly or annual customer bill.

https://blogs.technet.microsoft.com/hybridcloudbp/2016/03/03/introduction-to-csp-model/

 

The catch is that CSP subscriptions have a limitation: they only have access to Azure Resource Manager (ARM) resources, that is, resources created in the new Azure portal (https://portal.azure.com/) that do not appear in the previous portal. Please see the reference below:

 

Difference of Azure CSP Subscriptions

To understand the nuances of Azure subscription migration to CSP, you need to understand what is the difference of Azure CSP Subscriptions comparing to Traditional Azure subscriptions and Azure EA subscriptions:

  • Only ARM services available – latest and greatest. No legacy ASM or “Classic” services, no “IaaSv1”.
  • Not all ARM services, available in Traditional/EA Azure subscriptions are available in CSP. But almost all of them.
  • Since there are no ASM services, there is no need in old Azure Portal

https://blogs.technet.microsoft.com/hybridcloudbp/2016/08/26/azure-subscription-migration-to-csp/

 

So, if you try to use either a management certificate or a publish settings file with a CSP subscription, it won’t work: CSP accounts only support ARM resources, while management certificates and publish settings files only work against ASM resources. You therefore cannot use them to automate the management of your resources in a CSP subscription.

 

The article Using Azure PowerShell with Azure Resource Manager has helpful information on managing resources in a CSP account through Azure PowerShell, and it includes the following note:

“The Resource Manager modules requires Add-AzureRmAccount. A Publish Settings file is not sufficient.”
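
In practice, that means signing in through the ARM cmdlets instead of importing a publish settings file. A minimal sketch, assuming the AzureRM module is installed (the subscription ID below is a placeholder):

# Sign in with Azure AD credentials; a publish settings file will not work for ARM
Add-AzureRmAccount

# Select the CSP subscription you want to manage (placeholder ID)
Select-AzureRmSubscription -SubscriptionId "<subscription-id>"

# From here the AzureRm* cmdlets work as usual, for example:
Get-AzureRmResourceGroup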

 

References:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-deployment-model

https://msdn.microsoft.com/en-us/library/dn385850(v=nav.70).aspx

https://docs.microsoft.com/en-us/azure/azure-api-management-certs

Detect backend API response latency using an APIM policy


Recently a developer asked me whether there is a way to measure the response time of the backend API using an Azure API Management policy expression.
There is no single policy expression that tracks backend response time. However, we can combine a few policy expressions in the inbound and outbound sections to achieve the same result.
For example, I used the set-variable policy to store the current DateTime once in the inbound section (the time the request was received) and once in the outbound section (the time the response was received), then subtracted the first from the second; the difference is the time taken by the backend API.

Here’s how the policy configuration looks:

<policies>
  <inbound>
    <set-variable name="Time1" value="@(DateTime.Now)" />
    <base />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <set-variable name="Time2" value="@(DateTime.Now - context.Variables.GetValueOrDefault<DateTime>("Time1"))" />
    <log-to-eventhub logger-id="neweventlogger">
      @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId, context.Request.IpAddress, context.Operation.Name, context.User.Email, context.Variables.GetValueOrDefault<TimeSpan>("Time2")) )
    </log-to-eventhub>
    <base />
  </outbound>
</policies>

This policy logs a string like:
11/25/2016 12:31:39 PM,kalpitsinghtest.azure-api.net,bc9f3ace-39dd-43ba-a1c0-d65176210ce1,52.187.51.17,Create resource,user@domain.com,00:00:00.4233085
The last field (00:00:00.4233085) is the backend response time.

Let me know how it goes.

Happy Coding!

How to protect an API in Azure APIM using Azure Active Directory.


This article summarizes a very common scenario: protecting an API with OAuth 2.0 authentication using Azure AD and Azure API Management. There is a simple way to do this. Here are the steps.

We need to create two applications in Azure Active Directory. In this example I am naming them EchoBackend (your backend API, registered as a web application) and Testclientapp (the client, registered as a native app). The steps assume you are familiar with creating applications in Azure AD and with the Publisher and Developer portals of Azure APIM.

 

  1. Create the EchoBackend application with the following information: [Type: Web Application]
    1. Name: EchoBackend
    2. Sign-on URL: https://NormalAuth.onmicrosoft.com/echo
    3. Client ID: xxxxxxxx-50ad-4a34-a4ed-63f7ff4ae762 (assuming)
    4. App ID URI: https://kalpitsinghtest.azure-api.net/
    5. Add permissions: Read directory data; Sign in and read user profile.

 

  2. Create the Testclientapp application with the following parameters: [Type: Native App]
    1. Name: Testclientapp
    2. Client ID: yyyyyyyy-5659-4cf6-87bd-ad8d176521d2 (assuming)
    3. Redirect URI: https://kalpitsinghtest.portal.azure-api.net/docs/services/57d0xxxxxxxxx208bce4386c/console/oauth2/authorizationcode/callback (this can be taken from the publisher portal for your API; for the time being it can be anything)
    4. Add permissions: Access EchoBackend, and check the last permission listed.

 

  3. Change the security to OAuth 2.0 in the developer portal:
    1. Set the authorization endpoint URL: https://login.microsoftonline.com/374bxxxxxxx-4b92-a9c9-8bea4b16f35a/oauth2/authorize
    2. Add one additional body parameter named resource, whose value is the web app's App ID URI: resource: https://kalpitsinghtest.azure-api.net/
    3. Client authentication method: Basic
    4. Client ID: yyyyyyyy-5659-4cf6-87bd-ad8d176521d2
    5. Authorization endpoint URL and token endpoint URL: as per your AD tenant.
    6. Use the redirect_uri from the Testclientapp registration.

                          

Please refer to the following article for more details: https://azure.microsoft.com/en-in/documentation/articles/api-management-howto-protect-backend-with-aad/#configure-an-api-management-oauth-20-authorization-server

 

Hope this simplifies the understanding.

Happy Coding!

Can we use the set-query-parameter policy inside the send-request policy? No


You will often come across situations where you need the send-request policy to call an external service that performs complex processing and returns data to the API Management service for use in further policy processing.

The send-request policy looks like this:


<send-request mode="new" response-variable-name="tokenstate" timeout="20" ignore-error="true">
  <set-url>https://microsoft-apiappec990ad4c76641c6aea22f566efc5a4e.azurewebsites.net/introspection</set-url>
  <set-method>POST</set-method>
  <set-header name="Authorization" exists-action="override">
    <value>basic dXNlcm5hbWU6cGFzc3dvcmQ=</value>
  </set-header>
  <set-header name="Content-Type" exists-action="override">
    <value>application/x-www-form-urlencoded</value>
  </set-header>
  <set-body>@($"token={(string)context.Variables["token"]}")</set-body>
</send-request>

Now, if you want to add query string parameters to the URL inside set-url by using the set-query-parameter policy, this is not allowed, and you will not even be able to save the policy. It is easy to get confused and try to use set-query-parameter inside send-request, but set-query-parameter adds, replaces the value of, or deletes a query string parameter of the incoming request. It can be used to pass query parameters expected by the backend service that are optional or never present in the request; see https://msdn.microsoft.com/library/azure/7406a8ce-5f9c-4fae-9b0f-e574befb2ee9#SetQueryStringParameter
So how do we pass query string parameters to the send-request URL? It’s simple: we can use APIM context variables.

First, set the variables from the incoming request:

<set-variable name="fromDate" value="@(context.Request.Url.Query["fromDate"].Last())" />
<set-variable name="toDate" value="@(context.Request.Url.Query["toDate"].Last())" />

Then use these variables inside the send-request policy:

<send-request mode="new" response-variable-name="revenuedata" timeout="20" ignore-error="true">
  <set-url>@($"https://accounting.acme.com/salesdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["toDate"]}")</set-url>
  <set-method>GET</set-method>
</send-request>

Refer to this link for more details: https://github.com/Microsoft/azure-docs/blob/master/articles/api-management/api-management-sample-send-request.md

Please let me know your queries.

Happy Coding!

Troubleshooting: Azure Auto-scale profile does not change


Sometimes customers run into a situation with autoscale where they try to update or delete the autoscale profile, but the previous profile keeps coming back.

 

Generally, this occurs because duplicate autoscale profiles have been created, so deleting the duplicate profile should mitigate the issue. Here are the steps to delete an autoscale profile through Azure Resource Explorer (https://resources.azure.com/):

 

  • Under subscriptions, expand the subscription in which the cloud service was created, and then expand the resourceGroups node.
  • Select and expand the cloud service you are having the issue with.
  • Expand the providers node, and then the microsoft.insights node under it, to list the autoscale settings created for the cloud service.
  • Select the obsolete profile and delete it using the delete option on the main panel.

 

NOTE: Deleting a profile permanently removes the autoscale profile. Please make sure that you have selected the correct cloud service and the correct profile before deleting.
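
If you prefer scripting this rather than using Resource Explorer, a rough PowerShell sketch with the generic ARM cmdlets is shown below; the resource group name and resource ID are placeholders, and parameter sets for Get-AzureRmResource vary slightly between AzureRM versions, so list first and double-check before removing anything:

# List the autoscale settings in the resource group that hosts the cloud service (placeholder name)
Get-AzureRmResource -ResourceGroupName "<resource-group>" -ResourceType "microsoft.insights/autoscalesettings" |
    Select-Object Name, ResourceId

# Delete the obsolete/duplicate autoscale setting by its resource ID (placeholder)
Remove-AzureRmResource -ResourceId "<resource-id-of-duplicate-setting>" -Force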

Following the above steps should resolve the issue. If it persists, you may want to open a support ticket with the Microsoft Azure support team.

How to add missing resource types in Azure


Some of you may get the error “The subscription ‘xxxxxx’ is not registered for resource types ‘Microsoft.ServiceBus/namespaces’. (Code: MissingRegistrationsForTypes)” when trying to move a resource from one subscription to another.

This error is expected if the destination subscription has not been registered for the required resource types beforehand.

To overcome this, first register the destination subscription for the required resource type; this simply prepares the destination subscription for the move. If you skip this step, you will receive the error above.

Resolution:

1. Log in to the new portal (portal.azure.com) with the destination subscription's login credentials.

2. As in the image below, click “My permissions” under your user ID (top right).

 

portal_permission

3. Click “Resource providers”; this lists the full resource provider status, showing which providers are registered and which are not.

4. Select the provider type required on the destination side and register it.

For example, if ‘Microsoft.EventHub’ is required on the destination side, click the “Register” link to register it; this is a one-time process.

rp_type

5. Once the registration is successful, we can move the resource between subscriptions.

Alternatively, we could register the provider through PowerShell:

# Sign in to Azure Resource Manager
Add-AzureRmAccount
# Register the Event Hubs resource provider in the current subscription
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.EventHub

More information about the command can be found at this link:
https://msdn.microsoft.com/en-us/library/mt603685.aspx/
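
Before attempting the move, you can also confirm from PowerShell that the provider has finished registering; a quick sketch:

# Should show RegistrationState = Registered once the operation completes
Get-AzureRmResourceProvider -ProviderNamespace Microsoft.EventHub |
    Select-Object ProviderNamespace, RegistrationState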

             


How to reclaim a recently deleted Service Bus namespace name without waiting 7 days


When you have multiple subscriptions, you sometimes create a namespace in a different subscription than the one you intended. Some customers then delete the namespace and try to re-create it with the same name in the correct subscription, but namespace creation fails with an error saying the namespace name was previously used in another project and was deleted less than one week ago.

This is due to the 7-day grace period that applies when a namespace is deleted: Service Bus does not allow the name to be claimed from any other subscription for 7 days.

The namespace can be re-created immediately in the same subscription. If the deletion and re-creation happen in two different subscriptions, there is a blocking period of 7 days.

To work around this, follow the steps below:

You can move the Service Bus namespace from one subscription to the other.

Just make sure that the user is an admin/owner on both the source and destination subscriptions.

Go to the new Azure portal (Ibiza) and re-create the namespace in the same subscription from which you deleted it.

Once the Service Bus namespace is created, select it and go to the Overview section; under Essentials you will find a “Subscription name (change)” option, as shown in the image below.

change subscription

 

move resource

 

The move operation takes some time to complete; a notification is shown once it finishes, as shown below.

 

notification

Note: Once you have moved the resource, check the destination subscription and confirm that the resource actually moved, as the operation sometimes ends in a failure error.
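
The same move can also be scripted with the generic ARM move cmdlet; a minimal sketch, where the resource ID and destination values are placeholders you would fill in:

# Resource ID of the re-created Service Bus namespace (placeholder)
$namespaceId = "/subscriptions/<source-subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ServiceBus/namespaces/<namespace-name>"

# Move it to the destination subscription and resource group (placeholders)
Move-AzureRmResource -ResourceId $namespaceId `
    -DestinationSubscriptionId "<destination-subscription-id>" `
    -DestinationResourceGroupName "<destination-resource-group>"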

Hope it helps…

How to retrieve the job action settings of a Scheduler job using PowerShell


Sometimes you may need to retrieve the job action details of a scheduled job in a Scheduler job collection using PowerShell.

There is no direct option in PowerShell to retrieve the job action details. You need to cast the JobAction to the appropriate derived action-settings class to retrieve details of the action such as the URI, storage account, Service Bus queue, etc.

Here is a sample PowerShell script that retrieves details such as the job URI, storage account, and Service Bus queue:

 PS Script:

# Get all jobs in the Scheduler job collection
$schedulerJobs = Get-AzureRmSchedulerJobCollection -ResourceGroupName SchedulerIbiza -JobCollectionName testscheduler | Get-AzureRmSchedulerJob

foreach ($job in $schedulerJobs)
{
    if ($job.JobAction.JobActionType -eq "StorageQueue")
    {
        # Cast to the storage-queue action details to read the storage account
        $jobAction = [Microsoft.Azure.Commands.Scheduler.Models.PSStorageJobActionDetails]$job.JobAction
        Write-Output 'Storage Account'
        Write-Output '---------------'
        Write-Output $jobAction.StorageAccount
    }
    elseif (($job.JobAction.JobActionType -eq "ServiceBusQueue") -or ($job.JobAction.JobActionType -eq "ServiceBusTopic"))
    {
        # Cast to the Service Bus action details to read the queue name
        $jobActionSB = [Microsoft.Azure.Commands.Scheduler.Models.PSServiceBusJobActionDetails]$job.JobAction
        Write-Output 'Service Bus Queue'
        Write-Output '-----------------'
        Write-Output $jobActionSB.ServiceBusQueueName
    }
    else
    {
        # Otherwise treat it as an HTTP action and read the job URI
        $jobActionHttp = [Microsoft.Azure.Commands.Scheduler.Models.PSHttpJobActionDetails]$job.JobAction
        Write-Output 'Job Uri'
        Write-Output '-------'
        Write-Output $jobActionHttp.Uri
    }
}

Below is a sample screenshot of the result:

portalimages

How to fix Load Balancer not working in Round-robin fashion for your Cloud Service


Issue:

I recently came across a case where a customer reported that the load balancer of his cloud service was not distributing requests in a round-robin fashion. He noticed that far more requests were going to a few instances while the other instances were receiving very few. To analyze the issue, we collected the IIS logs from all the instances (the customer had 7) and, after parsing the logs, confirmed that 2 instances were receiving far fewer requests than the other 5.

How does the Load Balancer work?

By default, the load balancer uses a 5-tuple algorithm to distribute client requests: a hash of (source IP, source port, destination IP, destination port, protocol type) maps traffic to the available servers. It provides stickiness only within a transport session. Packets in the same TCP or UDP session will be directed to the same datacenter IP (DIP) instance behind the load-balanced endpoint. When the client closes and re-opens the connection, or starts a new session from the same source IP, the source port changes and causes the traffic to go to a different DIP endpoint.

If most of the load goes to a single instance, the number one reason is the testing client creating and reusing the same TCP connections. The Azure load balancer does round-robin load balancing for new incoming TCP connections, not for new incoming HTTP requests. So when a client makes the first request to the cloudapp.net URL, the LB sees an incoming TCP connection and routes it to the next instance in the LB rotation, and the TCP connection is established between the client and that server. Depending on the client app, all future HTTP traffic from that client may go over the same TCP connection or over a new one.

To balance traffic across the other Azure role instances, the client must break the TCP connection and establish a new one. Load balancing at the HTTP-request level would mean killing existing TCP connections and creating new ones; since TCP connection setup is relatively expensive, reusing the same TCP channel for subsequent HTTP requests is the more efficient use of the channel.

If the client application is modified to open new TCP connections instead of reusing one connection for all HTTP requests (for example, by using multiple browser instances on the same client machine), the connections will be spread across the Azure instances in a round-robin fashion.

So depending on how the clients are establishing TCP connections, requests may be routed to the same instance.

Cause:

The Azure load balancer does round robin load balancing for new incoming TCP connections, not for new incoming HTTP requests.

By default, BasicHttpBinding sends a Connection: Keep-Alive HTTP header in messages, which enables clients to establish persistent connections to the services that support them. This configuration offers better throughput because previously established connections can be reused to send subsequent messages to the same server.

Resolution:

However, connection reuse may cause clients to become strongly associated with a specific server within the load-balanced farm, which reduces the effectiveness of round-robin load balancing. If this behavior is undesirable, HTTP Keep-Alive can be disabled on the server using the KeepAliveEnabled property with a CustomBinding or user-defined Binding. The following example shows how to do this using configuration.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.serviceModel>
    <protocolMapping>
      <add scheme="http" binding="customBinding" />
    </protocolMapping>
    <bindings>
      <customBinding>
        <!-- CustomBinding that disables HTTP keep-alive on the transport -->
        <binding>
          <textMessageEncoding />
          <httpTransport keepAliveEnabled="false" />
        </binding>
      </customBinding>
    </bindings>
  </system.serviceModel>
</configuration>

Note: This applies to a WCF application where multiple clients consume the services hosted in the cloud.

 Useful Links:

Load Balancing – https://msdn.microsoft.com/en-us/library/ms730128(v=vs.110).aspx

 

Azure Blob Storage operations with Storage Python SDK


This blog describes how to perform basic blob operations using the Python API provided in the Azure SDK. We’ll cover the following functionality:

  • Create a container
  • Upload a blob into a container
  • Download blobs
  • List the blobs in a container
  • Delete a blob

Installing the SDK:

My machine is a 64-bit Windows machine, so I downloaded the “Windows x86-64 executable installer” available under the latest version of Python for Windows from here.

Then I installed the Azure Storage SDK for Python. To do that, follow any one of the three options below.

Option 1: Via PyPi

To install via the Python Package Index (PyPI), type:

pip install azure-storage

Option 2: Source Via Git

To get the source code of the SDK via git just type:

git clone git://github.com/Azure/azure-storage-python.git

cd ./azure-storage-python

python setup.py install

Option 3: Source Zip

Download a zip of the code via GitHub or PyPi. Then, type:

cd ./azure-storage-python

python setup.py install

Create a Container:

Every blob in Azure storage must reside in a container. Please make sure that the following naming convention has been followed for a container.

  • Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
  • Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
  • All letters in a container name must be lowercase. If you include an upper-case letter in a container name, or otherwise violate the container naming rules, you may receive a 400 error (Bad Request).
  • Container names must be from 3 through 63 characters long.

By default, the new container that is created is private. “public_access=PublicAccess.Container” is used in the code below to set the container access policy to “Container”. Alternatively, you can modify a container after you have created it using the “set_container_acl” method on the “blob_service” object.

Code used to create a container:

from azure.storage.blob import BlockBlobService
from azure.storage.blob import PublicAccess

# Setting parameters
ACCOUNT_NAME = "<Your Storage account name>"
ACCOUNT_KEY = "<Your storage account key>"
CONTAINER_NAME = "<Container name you want to create>"

# Code block to create a container
blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)

try:
    container_status = blob_service.create_container(CONTAINER_NAME, public_access=PublicAccess.Container)
    print("Container %s" % CONTAINER_NAME + " creation success status: %s" % container_status)
except:
    print("Container %s" % CONTAINER_NAME + " creation failed")

Screenshot:

 

Create Container

 

Upload a blob into a container:

To create a block blob and upload data, use the create_blob_from_path, create_blob_from_stream, create_blob_from_bytes or create_blob_from_text methods. They are high-level methods that perform the necessary chunking when the size of the data exceeds 64 MB. create_blob_from_path uploads the contents of a file from the specified path, and create_blob_from_stream uploads the contents from an already opened file/stream. create_blob_from_bytes uploads an array of bytes, and create_blob_from_text uploads the specified text value using the specified encoding (defaults to UTF-8).

Code used to upload blob into a container:

from azure.storage.blob import BlockBlobService
from os import listdir
from os.path import isfile, join

# Setting parameters
ACCOUNT_NAME = "<Your Storage account name>"
ACCOUNT_KEY = "<Your storage account key>"
CONTAINER_NAME = "<Your storage container name>"
LOCAL_DIRECT = r"<Local directory containing files>"

# Code block to upload blobs to Blob storage
blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)

local_file_list = [f for f in listdir(LOCAL_DIRECT) if isfile(join(LOCAL_DIRECT, f))]
file_num = len(local_file_list)

for i in range(file_num):
    local_file = join(LOCAL_DIRECT, local_file_list[i])
    blob_name = local_file_list[i]
    try:
        blob_service.create_blob_from_path(CONTAINER_NAME, blob_name, local_file)
        print("File upload successful %s" % blob_name)
    except:
        print("Something went wrong while uploading the files %s" % blob_name)

Screenshot:

 

Upload Blobs

 

Download Blobs:

To download data from a blob, use get_blob_to_path, get_blob_to_stream, get_blob_to_bytes, or get_blob_to_text. These are high-level methods that perform the necessary chunking when the size of the data exceeds 64 MB.

Code used to download blob to a local directory:

from azure.storage.blob import BlockBlobService

# Setting parameters
ACCOUNT_NAME = "<Your Storage account name>"
ACCOUNT_KEY = "<Your storage account key>"
CONTAINER_NAME = "<Your storage container name>"
BLOB_TO_BE_DOWNLOADED = "<Your Blob Name>"

# Code block to download a blob
blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)

try:
    blob_service.get_blob_to_path(CONTAINER_NAME, BLOB_TO_BE_DOWNLOADED, file_path=r"<Local Path>\<File Name>.<extension of the original blob>")
    print("Blob has been downloaded")
except:
    print("Blob download failed")

Screenshot:

 

Download Blobs

 

List the blobs in a container:

To list the blobs in a container, use the list_blobs method.

Code used to list blobs in a container:

from azure.storage.blob import BlockBlobService

# Setting parameters
ACCOUNT_NAME = "<Your Storage account name>"
ACCOUNT_KEY = "<Your storage account key>"
CONTAINER_NAME = "<Your storage container name>"

# Code block to list the blobs present in a container
blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)

try:
    blob_names = blob_service.list_blobs(CONTAINER_NAME)
    for blob in blob_names:
        print(blob.name)
except:
    print("Blob listing failed")

Screenshot:

 

List Blobs in Container

 

Delete a blob:

To delete a blob, use the delete_blob method.

Code used to delete a blob:

from azure.storage.blob import BlockBlobService

# Setting parameters
ACCOUNT_NAME = "<Your Storage account name>"
ACCOUNT_KEY = "<Your storage account key>"
CONTAINER_NAME = "<Your storage container name>"
BLOB_TO_BE_DELETED = "<Blob name with extension>"

# Code block to delete a blob
blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)

try:
    blob_service.delete_blob(CONTAINER_NAME, BLOB_TO_BE_DELETED)
    print("Blob has been deleted successfully")
except:
    print("Blob deletion failed")

Screenshot:

 

Delete Blobs

 

Articles referred:

Introduction on azure storage with Python: https://docs.microsoft.com/en-us/azure/storage/storage-python-how-to-use-blob-storage

Azure Storage SDK for Python: https://github.com/Azure/azure-storage-python

Troubleshooting Performance Issues with Cloud Services (PaaS) – Data Collection


Collecting the right data at the right time is the most critical part of troubleshooting any kind of performance issue, especially when the issue is intermittent and cannot be reproduced at will. Many tools are available to collect this data, but you need to know which one to use, when, and how. In this blog post we summarize the what, when, and how of data collection for various performance scenarios. Use the table below to find the kind of issue you are dealing with, then follow the link to the relevant section.

You are troubleshooting | Data collection when the problem can be reproduced easily | Data collection when the problem is intermittent
Crash                   | ProcDump manual                                           | Debug Diag automated crash rule
Hang                    | ProcDump consecutive                                      | Debug Diag automated hang rule
Slow response time      | IIS logs, FREB logs, ProcDump manual, PerfView manual     | IIS logs, FREB logs, Debug Diag automated slow-response rule
High CPU                | ProcDump manual, PerfView manual                          | ProcDump automated, PerfView automated
High memory             | PerfView manual                                           | Debug Diag automated high-memory rule

You will need one or more of these tools to collect the logs, depending on the scenario:

Tool 1: Procdump  https://docs.microsoft.com/en-us/sysinternals/downloads/procdump

Tool 2: Perfview : http://www.microsoft.com/en-us/download/details.aspx?id=28567

Tool 3: Debug Diag: https://www.microsoft.com/en-us/download/details.aspx?id=49924

 

ProcDump manually

The following command can be used to attach ProcDump to a process and capture memory dumps of it:

C:\Tools\procdump> procdump.exe -ma -s 30 -n 3 <PID> <OutputFolder>

-ma: Write a dump file with all process memory. The default dump format only includes thread and handle information.
-s: Seconds between consecutive dumps (default is 10). Adjust this based on the slowness.
-n: Number of dumps to write before exiting.
<PID>: Replace this with the process ID of the process you want to dump. You can use Task Manager (or PowerShell) to get the PID.
<OutputFolder>: Location where the dumps should be stored.
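
If you are connected to the instance over Remote Desktop, a quick PowerShell one-liner is an easy way to find the PID; w3wp is just an example process name:

# List the worker process(es) and their process IDs
Get-Process -Name w3wp | Select-Object Id, ProcessName, CPU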

 

ProcDump automated / consecutive

The following command captures 3 consecutive memory dumps of the process with ID 5844, 5 seconds apart, once that process's CPU usage reaches 70%, and saves the dumps to c:\dumps\:

C:\Tools\procdump> procdump -ma -c 70 -s 5 -n 3 5844 c:\dumps\

-ma: Write a dump file with all process memory. The default dump format only includes thread and handle information.
-c: CPU threshold at which to create a dump of the process.
-s: Consecutive seconds before dump is written (default is 10). Change the parameter based on slowness
-n: Number of dumps to write before exiting.

 

IIS Logs

Location for collecting IIS logs
C:\Resources\Directory\{DeploymentID}.{Rolename}.DiagnosticStore\LogFiles\Web
Collect the log files relevant to the time of the issue.

 

Freb Logs

Location for Collecting FREB logs
C:\Resources\Directory\{DeploymentID}.{Rolename}.DiagnosticStore\FailedReqLogFiles

NOTE: This is not turned on by default in Windows Azure and is not frequently used.  But if you are troubleshooting IIS/ASP.NET specific issues you should consider turning FREB tracing on in order to get additional details. How to enable Failed Request Tracing

 

PerfView manually

  • Open the PerfView tool, go to the Collect menu, and click the Collect option.
  • Select the Zip, Merge, and Thread Time check boxes and click Start Collection.

  • Let it run for a few seconds (or while the issue reproduces) and then stop the collection. (It will take a minute or so to create the compressed file.)

 

PerfView automated

The following command captures 5 PerfView traces, up to 1000 MB each, for w3wp.exe when its CPU usage exceeds 60%:

C:\Tools\perfview> Perfview /NoGui collect "/StartOnPerfCounter=Process:% Processor Time:w3wp>60" -ThreadTime -CircularMB:1000 -CollectMultiple:5 -AcceptEula
 

-ThreadTime: Is a parameter that monitors the CPU Thread Time
-accepteula: specifies to automatically accept the Microsoft Software License Terms.
 
Example:

If you're seeing 80% to 90% CPU utilization by w3wp.exe then this threshold value should be 80.

 

Ok, I have captured the data. What do I do with that?

The following are helpful guides for doing some initial analysis of the memory dumps and PerfView logs, or you can always reach out to Microsoft Azure support for additional help:

How to analyze a memory dump using Debug Diag

Process Crash analysis from a memory dump

How to analyze Memory Leak with memory dump using  WinDBG

How to use PerfView to diagnose Memory Leak

Channel9 Tutorial on Memory investigation using PerfView

Channel9 Tutorial on CPU Performance investigation using PerfView
