Channel: Microsoft Azure Cloud Integration Engineering

Service Bus Error: The maximum entity size has been reached or exceeded for Topic


One of my customers suddenly started seeing the following error message:

Microsoft.ServiceBus.Messaging.QuotaExceededException
Message: The maximum entity size has been reached or exceeded for Topic: 'xxx-xxx-xxx'. Size of entity in bytes: 1073742326, Max entity size in bytes: 1073741824. TrackingId:xxxxxxxxxxxxxxxxxxxxxxxxxx, TimeStamp:6/30/2013 7:50:18 AM

 

What went wrong?
Luckily the error was self-explanatory: the maximum Topic size is 1 GB (1,073,741,824 bytes), while the Topic was already using 1,073,742,326 bytes, just over the 1 GB limit. We logged into the Azure Portal and verified the same. Next we ran Service Bus Explorer to check the status of the messages and saw thousands of messages sitting in the Topic.


Why were messages not getting cleared from the Topic?
Now the obvious question arises: what was keeping those thousands of messages in the Topic? After some investigation we found that the "Default Message Time to Live" for the Topic was set to a very high value; in this case, 1 year. To fix the issue, we set "Default Message Time to Live" to 2 days, as recommended by their app developer. This resolved the issue. The setting can be changed from the Topic's Configure page in the Azure Management Portal.
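The same change can also be made from code. Here is a minimal sketch using the NamespaceManager class from Microsoft.ServiceBus.dll; the topic path and connection string are placeholders:

    // Lower the default TTL on an existing topic (sketch; 'connectionString' is yours).
    var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
    TopicDescription topic = namespaceManager.GetTopic("xxx-xxx-xxx");
    topic.DefaultMessageTimeToLive = TimeSpan.FromDays(2);
    namespaceManager.UpdateTopic(topic);

After the change, new messages expire after two days instead of a year, so the Topic no longer accumulates messages until it hits the 1 GB quota.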


Service Bus Brokered Messaging: Unexpected Error Sending Messages Larger than 256K


When sending a message to an Azure Service Bus brokered messaging queue or topic with the .NET client library (Microsoft.ServiceBus.dll), and the size of the message exceeds 256 KB (the current maximum message size documented under Windows Azure Service Bus Quotas), you will see the error below. The same error recurs when attempting to resend the message.

 

Microsoft.ServiceBus.Messaging.MessagingCommunicationException was unhandled
  HResult=-2146233088
  Message=Error during communication with Service Bus. Check the connection information, then retry.
  Source=Microsoft.ServiceBus
  IsTransient=true
  StackTrace:
       at Microsoft.ServiceBus.Common.ExceptionDispatcher.Throw(Exception exception)
       at Microsoft.ServiceBus.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
       at Microsoft.ServiceBus.Messaging.IteratorAsyncResult`1.RunSynchronously()
       at Microsoft.ServiceBus.Messaging.MessageSender.OnSend(TrackingContext trackingContext, IEnumerable`1 messages, TimeSpan timeout)
       at Microsoft.ServiceBus.Messaging.MessageSender.Send(TrackingContext trackingContext, IEnumerable`1 messages, TimeSpan timeout)
       at Microsoft.ServiceBus.Messaging.MessageSender.Send(BrokeredMessage message)
       at Microsoft.ServiceBus.Messaging.QueueClient.Send(BrokeredMessage message)
       at repro.Program.Main(String[] args)
       at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
       at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
       at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
       at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
       at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
       at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
       at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
       at System.Threading.ThreadHelper.ThreadStart()
  InnerException: System.ServiceModel.CommunicationObjectFaultedException
       HResult=-2146233087
       Message=Internal Server Error: The server did not provide a meaningful reply; this might be caused by a premature session shutdown..TrackingId:52a267de-d987-4b31-b86f-4ee7bb9de012, Timestamp:11/7/2013 3:27:02 AM
       Source=Microsoft.ServiceBus
       StackTrace:
         Server stack trace:
         Exception rethrown at [0]:
            at Microsoft.ServiceBus.Common.ExceptionDispatcher.Throw(Exception exception)
            at Microsoft.ServiceBus.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
            at Microsoft.ServiceBus.Messaging.Sbmp.DuplexRequestBindingElement.DuplexRequestSessionChannel.DuplexCorrelationAsyncResult.End(IAsyncResult result)
            at Microsoft.ServiceBus.Messaging.Sbmp.DuplexRequestBindingElement.DuplexRequestSessionChannel.EndRequest(IAsyncResult result)
            at Microsoft.ServiceBus.Messaging.Channels.ReconnectBindingElement.ReconnectChannelFactory`1.RequestSessionChannel.RequestAsyncResult.<GetAsyncSteps>b__4(RequestAsyncResult thisPtr, IAsyncResult r)
            at Microsoft.ServiceBus.Messaging.IteratorAsyncResult`1.StepCallback(IAsyncResult result)
         Exception rethrown at [1]:
            at Microsoft.ServiceBus.Common.ExceptionDispatcher.Throw(Exception exception)
            at Microsoft.ServiceBus.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
            at Microsoft.ServiceBus.Common.AsyncResult`1.End(IAsyncResult asyncResult)
            at Microsoft.ServiceBus.Messaging.Channels.ReconnectBindingElement.ReconnectChannelFactory`1.RequestSessionChannel.EndRequest(IAsyncResult result)
            at Microsoft.ServiceBus.Messaging.Sbmp.RedirectBindingElement.RedirectContainerChannelFactory`1.RedirectContainerSessionChannel.RequestAsyncResult.<>c__DisplayClass17.<GetAsyncSteps>b__a(RequestAsyncResult thisPtr, IAsyncResult r)
            at Microsoft.ServiceBus.Messaging.IteratorAsyncResult`1.StepCallback(IAsyncResult result)
         Exception rethrown at [2]:
            at Microsoft.ServiceBus.Common.ExceptionDispatcher.Throw(Exception exception)
            at Microsoft.ServiceBus.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
            at Microsoft.ServiceBus.Common.AsyncResult`1.End(IAsyncResult asyncResult)
            at Microsoft.ServiceBus.Messaging.Sbmp.RedirectBindingElement.RedirectContainerChannelFactory`1.RedirectContainerSessionChannel.EndRequest(IAsyncResult result)
            at Microsoft.ServiceBus.Messaging.Channels.ReconnectBindingElement.ReconnectChannelFactory`1.RequestSessionChannel.RequestAsyncResult.<GetAsyncSteps>b__4(RequestAsyncResult thisPtr, IAsyncResult r)
            at Microsoft.ServiceBus.Messaging.IteratorAsyncResult`1.StepCallback(IAsyncResult result)
         Exception rethrown at [3]:
            at Microsoft.ServiceBus.Common.ExceptionDispatcher.Throw(Exception exception)
            at Microsoft.ServiceBus.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
            at Microsoft.ServiceBus.Common.AsyncResult`1.End(IAsyncResult asyncResult)
            at Microsoft.ServiceBus.Messaging.Channels.ReconnectBindingElement.ReconnectChannelFactory`1.RequestSessionChannel.EndRequest(IAsyncResult result)
            at Microsoft.ServiceBus.Messaging.Sbmp.SbmpTransactionalAsyncResult`1.<GetAsyncSteps>b__3b(TIteratorAsyncResult thisPtr, IAsyncResult a)
            at Microsoft.ServiceBus.Messaging.IteratorAsyncResult`1.StepCallback(IAsyncResult result)
         Exception rethrown at [4]:
            at Microsoft.ServiceBus.Common.ExceptionDispatcher.Throw(Exception exception)
            at Microsoft.ServiceBus.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
            at Microsoft.ServiceBus.Common.AsyncResult`1.End(IAsyncResult asyncResult)
            at Microsoft.ServiceBus.Messaging.Sbmp.SbmpMessageSender.EndSendCommand(IAsyncResult result)
       InnerException:

The above error results because the Service Bus service endpoint that the brokered messaging client talks to is configured to support a maximum message size of 256 KB.

To protect against this error, it is recommended to check the size of the message prior to sending it to Service Bus. Note that the BrokeredMessage.Size property is only accurate after a Send operation, because the size that is checked is the size of the wire message, not the size of the object serialized into the BrokeredMessage. Since the underlying wire message size cannot be calculated prior to sending, keep your serialized object slightly below 256 KB; 250 KB is recommended.
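Here is a minimal sketch of such a pre-send check, assuming you serialize the payload yourself before wrapping it in a BrokeredMessage (queueClient and payloadJson are placeholders, not part of the original sample):

    // Leave headroom below the 256 KB wire limit for headers and properties.
    const long MaxBodySizeBytes = 250 * 1024;

    byte[] body = Encoding.UTF8.GetBytes(payloadJson);
    if (body.Length > MaxBodySizeBytes)
    {
        throw new InvalidOperationException(
            string.Format("Payload is {0} bytes; keep it under {1} bytes.", body.Length, MaxBodySizeBytes));
    }

    // ownsStream: true lets the BrokeredMessage dispose the stream after Send.
    var message = new BrokeredMessage(new MemoryStream(body), true);
    queueClient.Send(message);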

Manage Service Bus Connectivity Mode


How do we manage the Service Bus connectivity mode while hosting a WCF service with Service Bus endpoints on IIS?

Hosting WCF Services with Service Bus Endpoints on IIS

http://msdn.microsoft.com/en-us/library/windowsazure/hh966775.aspx

Relevant Facts and Documentation:

  • The Service Bus listener uses AutoDetect mode by default; however, you can set it to other modes explicitly. (Source: http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.servicebus.connectivitymode.aspx)
  • Auto-detect mode probes the connectivity options in the order TCP, HTTP, HTTPS. To date, HTTPS is only supported for Service Bus Relay (Microsoft.ServiceBus.dll 1.8 and newer), not for brokered messaging. The default mode for brokered messaging is TCP; if HTTP is required, you have to specify it explicitly.
  • The Service Bus listener gets created when the WCF service host is opened. All of this code is wired up for you by the WCF assemblies when hosting in IIS; however, you can intercept and customize it by overriding the CreateServiceHost method of the WCF ServiceHostFactory class.

 Solution:

  • For the scenario described above, you can use a configuration appSetting to manage the connectivity mode, altering it by overriding CreateServiceHost.

 <configuration>

……………

  <appSettings>

    <add key="connMode" value="http"/>

  </appSettings>

………………

</configuration>

public class CustomServiceFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        return new SelfDescribingServiceHost(serviceType, baseAddresses);
    }
}

class SelfDescribingServiceHost : ServiceHost
{
    public SelfDescribingServiceHost(Type serviceType, params Uri[] baseAddresses)
        : base(serviceType, baseAddresses) { }

    // Overriding ApplyConfiguration() allows us to alter the ServiceDescription
    // prior to opening the service host.
    protected override void ApplyConfiguration()
    {
        base.ApplyConfiguration();
        string cmode = ConfigurationManager.AppSettings.Get("connMode");
        if (cmode == "http")
            ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;
        else
            ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Tcp;
    }
}

  • Also, make sure your WCF service's .svc page is made aware of the custom factory by changing it as follows.

From:

<%@ ServiceHost Language="C#" Debug="true" Service="<namespace>.Service1" CodeBehind="Service1.svc.cs" %>

To:

<%@ ServiceHost Language="C#" Debug="true" Service="<namespace>.Service1" Factory="<namespace>.CustomServiceFactory" %>

 

Send Messages to Service Bus Queue from REST Clients


I've had customers come to me asking how to make interop scenarios work with Service Bus messaging. One such scenario is sending messages from a REST client to a Service Bus listener that listens over NetMessagingBinding and expects a binary-encoded message.

Relevant Facts and Documentation:

Solution:

  • On the sender side, make sure the message version is set properly, the contentType is set on the brokered message, the body of the brokered message is sent as a stream, and the appropriate HTTP headers are added.

Here's some sample code.

string interopPayload = "<Record xmlns='" + Constants.ContractNamespace + "'><Id>" + i + "</Id></Record>";
WebClient RESTClient = new WebClient();
Random rand = new Random();
string sessionName = rand.Next(SampleManager.NumSessions).ToString();

// Creating BrokeredMessageProperty
BrokeredMessageProperty property = new BrokeredMessageProperty();
string soapBody = interopPayload;
property.Label = soapBody;
property.ContentType = "application/soap+msbin1";

MessageVersion _messageVersion = MessageVersion.Soap12WSAddressing10;

// Creating the message and adding BrokeredMessageProperty to the properties bag
Message message = Message.CreateMessage(_messageVersion, "SoapAction", soapBody);
message.Properties.Add(BrokeredMessageProperty.Name, property);

// Binary-encode the message body so the NetMessagingBinding listener can decode it
MemoryStream outStream = new MemoryStream();
XmlDictionaryWriter binaryWriter = XmlDictionaryWriter.CreateBinaryWriter(outStream);
XmlDocument doc = new XmlDocument();
doc.LoadXml(message.ToString());
doc.WriteContentTo(binaryWriter);
binaryWriter.Flush();
string binaryXmlAsString = Encoding.UTF8.GetString(outStream.ToArray());

// Add the WRAP token and BrokerProperties HTTP headers, then POST the binary payload
BrokerProperties bpts = new BrokerProperties();
bpts.CorrelationId = "CorrelationId-" + i.ToString();
RESTClient.Headers["Authorization"] = "WRAP access_token=\"" + authorizationToken + "\"";
RESTClient.Headers["BrokerProperties"] = bpts.Serialize();
string sendAddress = serviceAddress + queueName + "/Messages";
byte[] response = RESTClient.UploadData(sendAddress, "POST", outStream.ToArray());

                   

 

  • On the receiver side, the code is pretty straightforward except for an extra step of reading the streamed message.

NetMessagingBinding messagingBinding = new NetMessagingBinding("messagingBinding");
EndpointAddress address = SampleManager.GetEndpointAddress(queueName, serviceBusNamespace);
TransportClientEndpointBehavior securityBehavior = new TransportClientEndpointBehavior();
securityBehavior.TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(serviceBusIssuerName, serviceBusIssuerKey);

IChannelListener<IInputChannel> inputChannelListener = null;
IInputChannel inputChannel = null;
try
{
    inputChannelListener = messagingBinding.BuildChannelListener<IInputChannel>(address.Uri, securityBehavior);
    inputChannelListener.Open();
    inputChannel = inputChannelListener.AcceptChannel();
    inputChannel.Open();

    while (true)
    {
        try
        {
            // Receive message from queue. If no more messages available, the operation throws a TimeoutException.
            Message receivedMessage = inputChannel.Receive(receiveMessageTimeout);
            SampleManager.OutputMessageInfo("Receive", receivedMessage);
        }
        catch (TimeoutException)
        {
            break;
        }
    }

    // Close
    inputChannel.Close();
    inputChannelListener.Close();

……………………………….

public static void OutputMessageInfo(string action, Message message, string additionalText = "")
{
    lock (typeof(SampleManager))
    {
        BrokeredMessageProperty property = (BrokeredMessageProperty)message.Properties[BrokeredMessageProperty.Name];

        XmlDictionaryReader reader = message.GetReaderAtBodyContents();

        string result = reader.ReadInnerXml();
        Console.WriteLine(result);
        Console.ResetColor();
    }
}

 

I've attached the code; it is based on the WCFChannelSession sample in the BrokeredMessaging scenarios of the Service Bus SDK samples: http://servicebus.codeplex.com/

Just make sure you edit your Service Bus issuer and key information, and the sample should run as-is. The following variables need to be updated with your namespace specifics in Receiver.cs, Sender.cs and SampleManager.cs:

            serviceBusNamespace = "*****************";
            serviceBusIssuerName = "*****************";
            serviceBusIssuerKey = "*****************";

 

 

 

Send Messages from TopicClient to WCF Subscription Service


How do you send a message to a Service Bus topic using TopicClient and receive it with a WCF subscription service acting as the subscription client, without using the Service Bus SubscriptionClient class?

Relevant Facts and Documentation:

Solution:

Since the sender and receiver are based on two different technologies, we have to make sure the encoding on both sides matches and that a WCF data contract is defined. Please follow the steps below. Also, here's a good blog where Abhishek (from the Service Bus product group) talks about the different ways of formatting the content of Service Bus messages: http://abhishekrlal.com/2012/03/30/formatting-the-content-for-service-bus-messages/

  • Define the data contract

static class Constants
{
    public const string ContractNamespace = "urn:wcsubscriptionservice";
}

[DataContract(Namespace = Constants.ContractNamespace)]
public class TestMessage
{
    [DataMember]
    public string MsgNumber { get; set; }

    [DataMember]
    public string MsgContent { get; set; }
}

  • Format the brokered message when sending from the client.

BrokeredMessage message = new BrokeredMessage(
    new TestMessage() { MsgNumber = "1", MsgContent = "Test Message" },
    new DataContractSerializer(typeof(TestMessage)));

// Send message to the topic
topicClient.Send(message);

  • Set the ListenUri in the WCF service:

……………………

private static Uri serviceBusEndpointAddress = new Uri("sb://<namespace>.servicebus.windows.net/<topic>");
private static Uri subscriptionUri = new Uri("sb://<namespace>.servicebus.windows.net/<topic>/subscriptions/<subscription>");

……………………….

var endpoint = new ServiceEndpoint(contract, binding, new EndpointAddress(serviceBusEndpointAddress.AbsoluteUri));
endpoint.ListenUri = subscriptionUri;
endpoint.Behaviors.Add(transportBehavior);
endpoint.Name = "ReceiveMessage";

 

  • Manually control the receive context if you would like your user code to control the incoming message for further processing.

[ServiceContract(Namespace = "urn:wcsubscriptionservice")]
public interface IServiceBusReader
{
    [OperationContract(IsOneWay = true, Action = "*"), ReceiveContextEnabled(ManualControl = true)]
    void ReceiveMessage(TestMessageContract someMsg);
}

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class ServiceBusReader : IServiceBusReader
{
    public void ReceiveMessage(TestMessageContract someMsg)
    {
        TestMessage msg = someMsg.TestMessage;
        var incomingProperties = OperationContext.Current.IncomingMessageProperties;
        var property = incomingProperties[BrokeredMessageProperty.Name] as BrokeredMessageProperty;

        ReceiveContext receiveContext;
        if (ReceiveContext.TryGet(incomingProperties, out receiveContext))
        {
            receiveContext.Complete(TimeSpan.FromSeconds(10.0d));
        }
        else
        {
            throw new InvalidOperationException("...");
        }
    }
}

 

I've attached the sample code; please feel free to use it. Just make sure you input the values for the Service Bus namespace, key, topic name and subscription name.

Files: TopicSender.cs, WCFSubscriptionService.cs

Azure Service Bus AMQP Using Java SDK : Peer did not create remote endpoint for link, target: amqp_queue


 

While setting up an Azure Service Bus AMQP Java project in Eclipse by following the code from How to Use JMS with AMQP 1.0 in Azure with Eclipse, I kept getting the following error:

javax.jms.JMSException: Peer did not create remote endpoint for link, target: amqp_queue_portal
    at org.apache.qpid.amqp_1_0.jms.impl.MessageProducerImpl.<init>(MessageProducerImpl.java:77)
    at org.apache.qpid.amqp_1_0.jms.impl.SessionImpl.createProducer(SessionImpl.java:348)
    at org.apache.qpid.amqp_1_0.jms.impl.SessionImpl.createProducer(SessionImpl.java:63)
    at SimpleSenderReceiver.<init>(SimpleSenderReceiver.java:41)
    at SimpleSenderReceiver.main(SimpleSenderReceiver.java:59)

Caused by: org.apache.qpid.amqp_1_0.client.Sender$SenderCreationException: Peer did not create remote endpoint for link, target: testqueue
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:171)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:104)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:97)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:83)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:69)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:63)
    at org.apache.qpid.amqp_1_0.client.Session.createSender(Session.java:74)
    at org.apache.qpid.amqp_1_0.client.Session.createSender(Session.java:66)
    at org.apache.qpid.amqp_1_0.jms.impl.MessageProducerImpl.<init>(MessageProducerImpl.java:72)

The queue in this case was created from the Azure Management Portal. An internet search turned up a lot of hits on Stack Overflow, but none of them seemed to provide a conclusive answer. So I debugged the Java code and read through some of the AMQP documentation at:

https://apache.googlesource.com/qpid/+/c8d0fb167d8fc89fcb27823414454675b60a9dc1/qpid/java/amqp-1-0-client/src/main/java/org/apache/qpid/amqp_1_0/client/Sender.java

http://msdn.microsoft.com/en-us/library/azure/hh780773.aspx

Later I created a queue using code instead of the Management Portal, and with this new queue the Java code worked fine.

string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

if (!namespaceManager.QueueExists("amqp_queue_code"))
{
    namespaceManager.CreateQueue("amqp_queue_code");
}
 

So I used Service Bus Explorer to compare the properties of the two queues, amqp_queue_portal and amqp_queue_code, and found that the Java code fails if the queue is "Partitioned". AMQP seems to need message ordering. If you create a queue from the portal with Quick Create, it creates a "Partitioned" queue by default. So when you create a queue from the portal, select Custom Create and un-check "Enable Partitioning". It should look as below.

[Screenshot: the Custom Create queue dialog with "Enable Partitioning" unchecked]
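If you create the queue from code instead, partitioning can be controlled explicitly. A minimal sketch with the same NamespaceManager as above:

    var queueDescription = new QueueDescription("amqp_queue_code");
    queueDescription.EnablePartitioning = false; // partitioned entities failed with this AMQP 1.0 client
    if (!namespaceManager.QueueExists(queueDescription.Path))
    {
        namespaceManager.CreateQueue(queueDescription);
    }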

I am able to get the messages now using the Java AMQP code published at http://azure.microsoft.com/en-us/documentation/articles/service-bus-java-how-to-use-jms-api-amqp/

[Screenshot: console output of the Java AMQP sample sending and receiving messages]

Hope this blog helps you overcome the javax.jms.JMSException: Peer did not create remote endpoint for link, target: amqp_queue_portal error.

Angshuman Nayak

Cloud Integration Engineering

Troubleshooting Scenario – High CPU usage on PaaS roles with the same load after running for a few hours


 

I had this interesting issue reported where the instance count would increase from 10 to 50 over the course of a month. This happened with exactly the same load and number of users. It was really perplexing for the customer's Azure application developers, and hence they reported the issue to Microsoft Azure Support.

On first analysis we found they had been manually increasing the instance count because the existing instances would hit around 90% CPU, plateau there, and hence become more or less unresponsive. So it was relatively simple to isolate the cause of the increase in instance count; the crucial thing was to find the cause of the high CPU.

High CPU on an instance is generally caused by the application code, so we took memory dumps on one of the instances when it got into the high-CPU situation. The way we analyze them is like the one I detailed in this blog: http://blogs.msdn.com/b/cie/archive/2013/11/28/windows-azure-worker-role-showing-high-cpu.aspx

Process to collect dumps

  a) RDP to the instance running the Cloud Service.
  b) From Task Manager, check which process is taking the most CPU and staying there without coming down.
  c) Right-click it and collect a full crash dump; repeat this every minute for, say, 5 minutes, so we have 5 dump files.
  d) Once you have the files you can analyze them, or create a ticket with Microsoft for an engineer to help analyze.

In this case it was the W3WP process, so we collected process dumps of it. In the first two memory dumps I didn't find any high CPU.

Dump Analysis  

I am not delving into the details of how I did it, as that merits a separate discussion. Across the dumps I see the CPU moving between 81% and 100%. Most of the calls that are stuck look like the following:

SP               IP               Function                                                                                                                                                                                                                                                        Source
00000011a7ff9008 0000000000000000 HelperMethodFrame                                                                                                                                                                                                                                               
00000011a7ff9150 00007ff90caf1177 Microsoft.Data.OData.DuplicatePropertyNamesChecker.CheckForDuplicatePropertyNames(Microsoft.Data.OData.ODataProperty)                                                                                                                                           
00000011a7ff91b0 00007ff90caee6d8 Microsoft.Data.OData.Atom.ODataAtomPropertyAndValueDeserializer.ReadPropertiesImplementation(Microsoft.Data.Edm.IEdmStructuredType, System.Collections.Generic.List`1<Microsoft.Data.OData.ODataProperty>, Microsoft.Data.OData.DuplicatePropertyNamesChecker,  
00000011a7ff9240 00007ff90caede16 Microsoft.Data.OData.Atom.ODataAtomEntryAndFeedDeserializer.ReadAtomContentElement(Microsoft.Data.OData.Atom.IODataAtomReaderEntryState)                                                                                                                        
00000011a7ff92c0 00007ff90caec553 Microsoft.Data.OData.Atom.ODataAtomEntryAndFeedDeserializer.ReadAtomElementInEntry(Microsoft.Data.OData.Atom.IODataAtomReaderEntryState)                                                                                                                        
00000011a7ff9300 00007ff90caec2a2 Microsoft.Data.OData.Atom.ODataAtomEntryAndFeedDeserializer.ReadEntryContent(Microsoft.Data.OData.Atom.IODataAtomReaderEntryState)                                                                                                                              
00000011a7ff9370 00007ff90cae9665 Microsoft.Data.OData.Atom.ODataAtomReader.ReadEntryStart()                                                                                                                                                                                                      
00000011a7ff93e0 00007ff90caf2ef6 Microsoft.Data.OData.Atom.ODataAtomReader.ReadAtEntryEndImplementation()                                                                                                                                                                                        
00000011a7ff9430 00007ff90cae88df Microsoft.Data.OData.ODataReaderCore.ReadImplementation()                                                                                                                                                                                                       
00000011a7ff9480 00007ff90cae8727 Microsoft.Data.OData.ODataReaderCore.InterceptException[[System.Boolean, mscorlib]](System.Func`1<Boolean>)                                                                                                                                                     
00000011a7ff94f0 00007ff90cca40a6 Microsoft.WindowsAzure.Storage.Table.Protocol.TableOperationHttpResponseParsers.TableQueryPostProcessGeneric[[System.__Canon, mscorlib]](System.IO.Stream, System.Func`6<System.String,System.String,System.DateTimeOffset,System.Collections.Generic.IDictiona 
00000011a7ff9580 00007ff90cca3df1 Microsoft.WindowsAzure.Storage.Table.TableQuery`1+<>c__DisplayClassf`2[[System.__Canon, mscorlib],[System.__Canon, mscorlib],[System.__Canon, mscorlib]].<QueryImpl>b__e(Microsoft.WindowsAzure.Storage.Core.Executor.RESTCommand`1<Microsoft.WindowsAzure.Stor 
00000011a7ff95e0 00007ff90caddc69 Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ProcessEndOfRequest[[System.__Canon, mscorlib]](Microsoft.WindowsAzure.Storage.Core.Executor.ExecutionState`1<System.__Canon>)                                                                            
00000011a7ff9630 00007ff90cad99e5 Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[[System.__Canon, mscorlib]](Microsoft.WindowsAzure.Storage.Core.Executor.StorageCommandBase`1<System.__Canon>, Microsoft.WindowsAzure.Storage.RetryPolicies.IRetryPolicy, Microsoft.WindowsAz 
00000011a7ff9970 00007ff90cca30ae Microsoft.WindowsAzure.Storage.Table.TableQuery`1+<>c__DisplayClass7[[System.__Canon, mscorlib]].<ExecuteInternal>b__6(Microsoft.WindowsAzure.Storage.IContinuationToken)                                                                                       
00000011a7ff99d0 00007ff90cca2fd8 Microsoft.WindowsAzure.Storage.Core.Util.CommonUtility+<LazyEnumerable>d__0`1[[System.__Canon, mscorlib]].MoveNext()                                                                                                                                            
00000011a7ff9a40 00007ff90cca22f8 MoviePlayer.TableStorage.GetListRangeEntity[[System.__Canon, mscorlib]](System.Collections.Generic.List`1<MoviePlayer.QueryTableStorage>, System.String)                                                                                                        
00000011a7ff9b70 00007ff90cca0612 MoviePlayer.NoSQLData.GetListCategoryTS(MoviePlayer.Video, System.String)                                                                                                                                                                          
00000011a7ffa020 00007ff90cca3ab8 MoviePlayer.NoSQLData.GetCategoryLatest(MoviePlayer.Video, System.String)                                                                                                                                                                            
00000011a7ffa090 00007ff90cafda64 RepSyndWebApplication.Player.Default.Page_Load(System.Object, System.EventArgs)                                                                                                                                                                             
00000011a7ffa640 00007ff962abc0b7 System.Web.UI.Control.LoadRecursive()                                                                                                                                                                                                                           
00000011a7ffa690 00007ff962adcc4a System.Web.UI.Page.ProcessRequestMain(Boolean, Boolean)                                                                                                                                                                                                         
00000011a7ffa750 00007ff962adbec9 System.Web.UI.Page.ProcessRequest(Boolean, Boolean)                                                                                                                                                                                                             
00000011a7ffa7c0 00007ff962adbd27 System.Web.UI.Page.ProcessRequest()                                                                                                                                                                                                                             
00000011a7ffa860 00007ff962ada453 System.Web.UI.Page.ProcessRequest(System.Web.HttpContext)                                                                                                                                                                                                       
00000011a7ffa8b0 00007ff962ae4b61 System.Web.HttpApplication+CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()                                                                                                                                                         
00000011a7ffa990 00007ff962aabee5 System.Web.HttpApplication.ExecuteStep(IExecutionStep, Boolean ByRef)                                                                                                                                                                                           
00000011a7ffaa30 00007ff962ac954a System.Web.HttpApplication+PipelineStepManager.ResumeSteps(System.Exception)                                                                                                                                                                                    
00000011a7ffab80 00007ff962aac0f3 System.Web.HttpApplication.BeginProcessRequestNotification(System.Web.HttpContext, System.AsyncCallback)                                                                                                                                                        
00000011a7ffabd0 00007ff962aa613e System.Web.HttpRuntime.ProcessRequestNotificationPrivate(System.Web.Hosting.IIS7WorkerRequest, System.Web.HttpContext)                                                                                                                                          
00000011a7ffac70 00007ff962aaefb1 System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                              
00000011a7ffae80 00007ff962aae9e2 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                                    
00000011a7ffaed0 00007ff9632028d1 DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)                                                                                                                                                                                     
00000011a7ffb6e8 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffb6e8 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffb6c0 00007ff962b5838b DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr, System.Web.RequestNotificationStatus ByRef)                                                                                                                                                                    
00000011a7ffb790 00007ff962aaf19f System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                              
00000011a7ffb9a0 00007ff962aae9e2 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                                    
00000011a7ffb9f0 00007ff9632028d1 DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)                                                                                                                                                                                     
00000011a7ffda48 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffda48 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffda20 00007ff962b5838b DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr, System.Web.RequestNotificationStatus ByRef)                                                                                                                                                                    
00000011a7ffdaf0 00007ff962aaf19f System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                              
00000011a7ffdd00 00007ff962aae9e2 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                                    
00000011a7ffdd50 00007ff9632028d1 DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)                                                                                                                                                                                     
00000011a7ffec98 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffec98 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffec70 00007ff962b5838b DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr, System.Web.RequestNotificationStatus ByRef)                                                                                                                                                                    
00000011a7ffed40 00007ff962aaf19f System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                              
00000011a7ffef50 00007ff962aae9e2 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                                    
00000011a7ffefa0 00007ff9632028d1 DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)                                                                                                                                                                                     
00000011a7fff1a8 0000000000000000 ContextTransitionFrame

The common function in all the call stacks is:

MoviePlayer.TableStorage.GetListRangeEntity[[System.__Canon, mscorlib]](System.Collections.Generic.List`1<MoviePlayer.QueryTableStorage>, System.String)

It is executing the following code

public System.Collections.Generic.List<T> GetListRangeEntity<T>(System.Collections.Generic.List<MoviePlayer.QueryTableStorage> queue, string query)
{
  string string1 = "";
  if (!string.IsNullOrWhiteSpace(query)) goto lab1;
  if (string.IsNullOrWhiteSpace(query))
  {
    if (queue != null)
    {
      System.Collections.Generic.List<MoviePlayer.QueryTableStorage>.Enumerator enumerator1 = queue.GetEnumerator();
      try
      {
        while (enumerator1.MoveNext())
        {
          MoviePlayer.QueryTableStorage storage1 = enumerator1.Current;
          switch (storage1.TypeTS)
          {
            case 0: goto lab2;
            case 1: goto lab3;
            case 2: goto lab4;
            case 3: goto lab5;
            case 4: goto lab6;
            case 5: goto lab7;
            case 6: goto lab8;
            case 7: goto lab9;
          }
          goto lab10;
        lab2:
          string1 = string.Concat(string1, Microsoft.WindowsAzure.Storage.Table.TableQuery.GenerateFilterCondition(storage1.KeyTS, storage1.OperationTS, storage1.ValueTS.ToString()));
          goto lab10;
        lab3:
          
   <SNIPPED>

The collection passed is as below.

0000000f58a9f1f0 System.Collections.Generic.List`1[[MoviePlayer.CategoryList, MoviePlayerDistList]]

The collection's fields are as follows; it contains 726 objects:

MT                           Field           Offset        Type                 VT     Attr           Value                          Name

00007ff96af01250     4000cd1       8              System.Object[]   0      instance     0000000f58b9a7c8      _items

00007ff96af037c8     4000cd2       18             System.Int32       1      instance                           726      _size

00007ff96af037c8     4000cd3       1c             System.Int32       1      instance                           726      _version

00007ff96af011b8    4000cd4       10             System.Object     0      instance     0000000000000000    _syncRoot

Looking at the size of this object.

sizeof(0000000f58b40940) = 137048 (0x21758) bytes (MoviePlayer.CategoryList) 

This MoviePlayer.CategoryList object is greater than 85,000 bytes. Any object greater than 85,000 bytes is not allocated on the regular small object heap (SOH) but goes to the LOH (Large Object Heap). Details about the LOH and GC can be found in these articles:

http://msdn.microsoft.com/en-us/library/ee787088.aspx

http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

The process uptime in the first dump is 1:45:31.000 = 6331 seconds. Looking at the number of times the GC has run, it's very high for Gen 2; it's almost as if a Gen 2 collection is attempted every two seconds.

.NET CLR Memory

Counter                      Value
===============              ==============
Bytes in All Heaps           84,495,440
GCHandles                    1,514
GEN 0 Collections            55,143
GEN 1 Collections            13,746
GEN 2 Collections            3,463
# Induced GCs                0
# of Pinned Objects          2
Sync Blocks in use           121
Finalization Survivors       0
Total Committed Bytes        502,095,872
Total Reserved Bytes         18,253,578,240
GEN 0 Heap Size              26,423,840
GEN 1 Heap Size              2,989,384
GEN 2 Heap Size              52,896,880
LOH Size                     2,185,336
% Time in GC                 7.90%
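Doing the arithmetic on these counters: 6331 seconds of uptime divided by 3,463 Gen 2 collections works out to roughly one Gen 2 collection every 1.8 seconds, and Gen 2 collections are the expensive, full collections. That frequency lines up with the sustained high CPU.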

So the action plan for this issue was to reduce the size of the MoviePlayer.CategoryList object. Since most developers outside of support engineering roles are not familiar with post-mortem analysis, the following can be used to find the size of an object using .NET or Visual Studio:

http://stackoverflow.com/questions/324053/find-out-the-size-of-a-net-object
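As a rough, illustrative sketch (not an exact CLR size), you can serialize an object graph and measure the payload; anything much over 85,000 bytes is a LOH candidate. The helper name here is hypothetical:

    // Requires the types in the object graph to be marked [Serializable].
    public static long ApproximateObjectSize(object obj)
    {
        using (var ms = new System.IO.MemoryStream())
        {
            var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
            formatter.Serialize(ms, obj);
            return ms.Length; // serialized length, a rough proxy for in-memory size
        }
    }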

After implementing the suggestions, the CPU grows linearly with load rather than exponentially. The CPU stopped hitting 90%+ and staying there, so there was no need to spawn additional instances of the role to serve users.

Hope this article helps in understanding one of the fundamental causes of frequent GC leading to high CPU. It's not specific to Azure; it can happen to on-premises applications as well.

Regards,

Angshuman Nayak

Cloud Integration Engineer

Not Able to Delete Storage Account – Ensure these image(s) and/or disk(s) are removed before deleting this storage account


 

While deleting an Azure storage account, you might come across the following error:

Storage account portalvhds9x8ddnOgp9tn2 has some active image(s) and/or disk(s), e.g. annayakNE-annayakNE-O-201410240936090519. Ensure these image(s) and/or disk(s) are removed before deleting this storage account.

SCENARIO 1 – DISKS

[Screenshot: the error shown when attempting to delete the storage account]

A storage account can't be deleted if it contains VHDs that are attached as disks. These disks are created when you create an Azure IaaS VM; you might have deleted the VMs, but the disks are still around, holding a lease on the VHDs located in this storage account.

The steps below will help you delete all the VHD blobs and storage containers for your account:

1. Delete the VM which has a lease on the VHD (if not already deleted).

2. Delete the associated disks/images. While deleting these, please ensure you select "Delete the associated VHD". You could also delete the VHDs manually.

[Screenshots: deleting the disk and selecting "Delete the associated VHD"]

3. Once the associated VHDs are deleted, you will be able to delete the storage account.


SCENARIO 2 – IMAGES

1. Log in to the Azure portal.
2. Navigate to Virtual Machines -> Images.

[Screenshot: the Images tab under Virtual Machines]

3. Select the image: Annayak-1-8-1-0-1-Ubuntu-12-10.
4. Delete the image. You can choose to delete the associated VHD.
5. After deleting the VHD, you should be able to delete the storage account.

Hope this helps you delete your storage accounts when you get the error “Storage account portalvhds9x8ddnOgp9tn2 has some active image(s) and/or disk(s), e.g. annayakNE-annayakNE-O-201410240936090519. Ensure these image(s) and/or disk(s) are removed before deleting this storage account”.

 

 

Regards,
Angshuman Nayak
Cloud Integration Engineering


Azure Storage Queue – Randomly Getting 403 Forbidden on delete of queue message with REST API


 

While developing an application that reads and deletes messages from an Azure Storage queue using the REST API (not the Azure Storage client libraries), some requests (but not all) to delete a message fail with a 403 error and the message is not deleted. This does not happen all the time; in many cases it works fine, but it seems to fail randomly for a few requests.

The remote server returned an error: (403) Forbidden.AuthenticationFailed. Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. The MAC signature found in the HTTP request ‘ALWhzP+84PAKpkQLpDj8Sl4MtnGkla3P0WjLkRaPDl4=’ is not the same as any computed signature.

So I took a Fiddler trace to log the requests and responses.

This delete worked!

GETTING THE MESSAGE FROM THE QUEUE

Request

GET http://annayakstorage.queue.core.windows.net/restapiqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages HTTP/1.1
x-ms-date: Mon, 23 Feb 2015 16:24:59 GMT
x-ms-version: 2009-09-19
Authorization: SharedKey annayakstorage:gPlR4ol9dgBPfW9B/KQ9jKdSLZP8lakXKGQL73/xNQf=
Accept: application/atom+xml,application/xml
Host: annayakstorage.queue.core.windows.net

Response

HTTP/1.1 200 OK
Cache-Control: no-cache
Transfer-Encoding: chunked
Content-Type: application/xml
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: gp9a9lfq-0007-0056-9k3q-9003l8000000
x-ms-version: 2009-09-19
Date: Mon, 23 Feb 2015 16:25:01 GMT
 
<?xml version="1.0" encoding="utf-8"?>
<QueueMessagesList>
<QueueMessage>
<MessageId>4g8ap7be-573k-6o9d-97ct-4k73k935gldk </MessageId>
<InsertionTime>Mon, 25 Feb 2015 16:24:59 GMT</InsertionTime>
<ExpirationTime>Mon, 04 Mar 2015 16:24:59 GMT</ExpirationTime>
<DequeueCount>1</DequeueCount>
<PopReceipt> AgAAAAMAAAAAAAAAFFKoV9QS0KF=</PopReceipt>
<TimeNextVisible>Mon, 25 Feb 2015 16:25:31 GMT</TimeNextVisible>
<MessageText> PQPxl9KmQLFaplGzSPQldxUaEL9lqPztDIF=</MessageText>
</QueueMessage>
</QueueMessagesList>

DELETING THE MESSAGE FROM THE QUEUE

Request

DELETE http://annayakstorage.queue.core.windows.net/restapiqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages/8e3be9ab-759b-4e0c-88bc-9c67d524bcad?popreceipt=AgAAAAMAAAAAAAAAFFKoV9QS0KF= HTTP/1.1
x-ms-date: Mon, 25 Feb 2015 16:24:59 GMT
x-ms-version: 2009-09-19
Authorization: SharedKey annayakstorage:zelCPqaDnaqGqXi1Eq8+5wpgAPZ0l73xuoC9D3C4k2c=
Host: annayakstorage.queue.core.windows.net

Response

HTTP/1.1 204 No Content
Content-Length: 0
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: gp9a8psq-0007-0056-8l6k-9003l8000000
x-ms-version: 2009-09-19
Date: Mon, 23 Feb 2015 16:25:01 GMT 

 

This Delete Failed!

GETTING THE MESSAGE FROM THE QUEUE

Request

GET http://annayakstorage.queue.core.windows.net/restapiqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages HTTP/1.1 
x-ms-date: Mon, 25 Feb 2015 16:24:59 GMT
x-ms-version: 2009-09-19
Authorization: SharedKey annayakstorage:gPlR4ol9dgBPfW9B/KQ9jKdSLZP8lakXKGQL73/xNQf=
Accept: application/atom+xml,application/xml
Host: annayakstorage.queue.core.windows.net

Response

HTTP/1.1 200 OK
Cache-Control: no-cache
Transfer-Encoding: chunked
Content-Type: application/xml
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: ge9sk924-0005-0078-843u-8114k9000000
x-ms-version: 2009-09-19
Date: Mon, 25 Feb 2015 16:25:01 GMT
 
<?xml version="1.0" encoding="utf-8"?>
<QueueMessagesList>
<QueueMessage>
<MessageId>8dvk9450-g8dk-6932-4j83-3429rslw8l2a</MessageId>
<InsertionTime>Mon, 25 Feb 2015 16:24:59 GMT</InsertionTime>
<ExpirationTime>Mon, 04 Mar 2015 16:24:59 GMT</ExpirationTime>
<DequeueCount>1</DequeueCount>
<PopReceipt>AgAAAAMAAAAAAAAADQ+uV9QS0KF=</PopReceipt>
<TimeNextVisible>Mon, 23 Feb 2015 16:25:31 GMT</TimeNextVisible>
<MessageText>YULzc2PaPAPbpwTkQgLsyxEmDL4laWaLWdP=</MessageText>
</QueueMessage>
</QueueMessagesList>
 

DELETING THE MESSAGE FROM THE QUEUE

Request

DELETE http://annayakstorage.queue.core.windows.net/plccommandsqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages/9cdb1031-b4fb-4079-8e59-4323efcd3e4c?popreceipt=AgAAAAMAAAAAAAAADQ+uV9QS0KF= HTTP/1.1
x-ms-date: Mon, 25 Feb 2015 16:24:59 GMT
x-ms-version: 2009-09-19
Authorization: SharedKey annayakstorage:QKWlaP+39WLQalWPaKd7Ka9MwpAjbh9Q9GaDlPxAFl9=
Host: annayakstorage.queue.core.windows.net

Response

HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Content-Length: 783
Content-Type: application/xml
Server: Microsoft-HTTPAPI/2.0
x-ms-request-id: ge9sk924-0005-0093-843u-8114k9000000
Date: Mon, 25 Feb 2015 16:25:01 GMT

Error

<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId: ge9sk924-0005-0093-843u-8114k9000000
 
Time:2015-02-25T16:25:01.8486841Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request ‘QKWlaP+39WLQalWPaKd7Ka9MwpAjbh9Q9GaDlPxAFl9=’ is not the same as any computed signature. Server used following string to sign: 
'DELETE 
x-ms-date:Mon, 25 Feb 2015 16:24:59 GMT
x-ms-version:2009-09-19
/annayakstorage/restapiqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages/9cdb1031-b4fb-4079-8e59-4323efcd3e4c
popreceipt:AgAAAAMAAAAAAAAAFF wV4VP0AE='.</AuthenticationErrorDetail></Error>

After spending quite a few hours going through the traces, I noticed that in the failing case the popreceipt is not the same as the one returned inside the message, and hence it gives the error.

Popreceipt in the request – DELETE http://annayakstorage.queue.core.windows.net/plccommandsqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages/9cdb1031-b4fb-4079-8e59-4323efcd3e4c?popreceipt= AgAAAAMAAAAAAAAADQ+uV9QS0KF= HTTP/1.1

Popreceipt in the error response – popreceipt : AgAAAAMAAAAAAAAADQ uV9QS0KF=

Notice that the '+' is gone. In all the working cases the popreceipt didn't contain a '+', and whenever the message had a popreceipt with a '+', as below, the delete failed.

<?xml version="1.0" encoding="utf-8"?>
<QueueMessagesList>
<QueueMessage>
<MessageId>8dvk9450-g8dk-6932-4j83-3429rslw8l2a</MessageId>
<InsertionTime>Mon, 25 Feb 2015 16:24:59 GMT</InsertionTime>
<ExpirationTime>Mon, 04 Mar 2015 16:24:59 GMT</ExpirationTime>
<DequeueCount>1</DequeueCount>
<PopReceipt>AgAAAAMAAAAAAAAADQ+uV9QS0KF=</PopReceipt>
<TimeNextVisible>Mon, 25 Feb 2015 16:25:31 GMT</TimeNextVisible>
<MessageText>YULzc2PaPAPbpwTkQgLsyxEmDL4laWaLWdP=</MessageText>
</QueueMessage>
</QueueMessagesList>

As per the standard, reserved characters need to be URL-encoded when transmitted over the internet: http://en.wikipedia.org/wiki/Percent-encoding

So I used the .NET WebUtility class (https://msdn.microsoft.com/en-us/library/zttxte6w(v=vs.110).aspx) and URL-encoded parameters like the popreceipt.

The following changes were made to the code to encode the special character. 

String urlPath = String.Format("{0}/messages/{1}?popreceipt={2}", 
WebUtility.UrlEncode(queueName), WebUtility.UrlEncode(messageid),
WebUtility.UrlEncode(popreceipt));
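With this change, a popreceipt such as AgAAAAMAAAAAAAAADQ+uV9QS0KF= travels on the wire as AgAAAAMAAAAAAAAADQ%2BuV9QS0KF= (the '+' percent-encodes to %2B), so the service decodes it back to the original value and the computed signatures match.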

It started working fine after that and the deletes don’t fail randomly anymore.

Regards,

Angshuman Nayak

Cloud Integration Engineer

The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints


 

While trying to deploy a D-series IaaS VM (this can happen for A8/A9 IaaS VMs as well) from the Azure portal or PowerShell, you may get the following error. It can also happen if you try to deploy or re-deploy a PaaS cloud service to a virtual network (VNET) after increasing the VM size in the configuration.

[Screenshots: the error as shown in the current portal and in the new portal]

Error:

The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region. The long running operation tracking ID was: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx. 

Check whether you are trying to deploy this IaaS VM to a cloud service that is part of an existing virtual network. The cause of the failure is that the existing VNET is attached to an affinity group, and an affinity group is bound to a specific set of servers, which may not offer the newer VM sizes.

Deployment configuration for a local virtual network:

<VirtualNetworkSite name="VNetLocal" AffinityGroup="VNetLocalAffinity">

So you can try the following options:

a) Deploy this IaaS VM outside the VNET.

b) If the VM is required to be in the same VNET, e.g. where it needs to be part of the existing solution, then convert the VNET from a local virtual network to a regional virtual network, as sketched below.
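For illustration, a regional virtual network is pinned to a region instead of an affinity group in the network configuration; assuming the schema of the time, the site definition changes along these lines (the region name is a placeholder):

From: <VirtualNetworkSite name="VNetLocal" AffinityGroup="VNetLocalAffinity">
To:   <VirtualNetworkSite name="VNetLocal" Location="West US">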

Details – http://azure.microsoft.com/blog/2014/05/14/regional-virtual-networks/

Regards,
Angshuman Nayak
Cloud Integration Engineer

Installing DebugDiag and importing rules through Azure Cloud Services startup tasks


This article describes the steps to install DebugDiag version 2 Update 1 on Cloud Services web and worker roles using startup tasks.

Note: In this article, the steps were applied to a Worker Role, but it also works for Web Roles.

Preparing the DebugDiag installer and the configuration file

  1. Download the DebugDiag (Debug Diagnostic Tool) v2 Update 1 installer. (The procedure was done with the x64 version of DebugDiag 2 Update 1, but it was also tested with version 1.2 and works fine.)
  2. Install DebugDiag on a machine where you can create the rule the way you want. After the rule is created and activated, click the Export button at the bottom right of the tool and export it to a file named "DebugDiagRule.ddconfig". For information about creating rules in DebugDiag, see Configuring DebugDiag to Automatically Capture a Full User Dump on a Managed Function.

Note: For Azure PaaS VMs, it's highly recommended that user files, such as the dumps in this case, are generated on the C: drive, which is the "user" drive, and not on the D: drive (system) or E: drive (application drive). You can set the "Userdump Path" to C:\DebugDiagDumps during rule creation, or you can edit the exported file (see more information in step 3).

 

 

       3.  In order to make sure that the dump will be generated on the C: drive of your Cloud Service instance, open the "DebugDiagRule.ddconfig" file that you created in the previous step with Notepad, look for "DumpPath", and make sure it's set to "C:\DebugDiagDumps". See this example:

 

<DebugDiag MaxClrExceptionDetailsPerSecond="30" MaxClrExceptionStacksPerSecond="20" MaxClrExceptionStacksPerExceptionType="10" MaxClrExceptionStacksTotal="100" MaxNativeExceptionStacksPerSecond="30">
  <Rules>
    <Rule TargetType="PROCESS" TargetName="WaWorkerHost.exe" UFEActionType="" UFEActionLimit="0" MaxDumpLimit="10" MatchingLeakRuleKey="" PageheapType="" RuleType="CRASH" Active="TRUE" RuleName="Crash rule for all instances of WaWorkerHost.exe" DumpPath="C:\DebugDiagDumps">
      <Exceptions>
        <Exception ExceptionCode="E0434352" ExceptionName="CLR (.NET) 4.0 Exception - System.NullReferenceException" ExceptionData="System.NullReferenceException" ExceptionData2="" ExceptionDataCheck="FALSE" ActionType="FULLDUMP" ActionLimit="3"/>
      </Exceptions>
      <Events/>
      <Breakpoints/>
    </Rule>
  </Rules>
</DebugDiag>

 

Note: If the directory set in the DumpPath does not exist in the machine where the rule will be imported, DebugDiag will create it.

       4.  For a Role (Web or Worker Role)

    1. In Solution Explorer, under Roles in the cloud service project, right-click your role and select Add > New Folder. Create a folder named Startup.
    2. Right-click the Startup folder and select Add > Existing Item. Select the DebugDiag installer and the DebugDiag configuration file and add them to the Startup folder.

 

 

Define startup tasks for your roles

Startup tasks allow you to perform operations before a role starts. In this case, we will use a startup task for installing DebugDiag and another task for importing the configuration file containing the rule exported previously. For more information on startup tasks see: Run Startup Tasks in Azure.

  1. Add the following to the ServiceDefinition.csdef file under the WebRole or WorkerRole node for all roles:
<Startup>
<Task commandLine="Startup\Installer.cmd" executionContext="elevated" taskType="simple"/>
<Task commandLine="Startup\ImportDebugConfig.cmd" executionContext="elevated" taskType="simple"/>
</Startup>

The above configuration will run the console commands Installer.cmd and ImportDebugConfig.cmd with administrator privileges, so it can install DebugDiag and, right after that, import the configuration file containing the rule.

        2.  Create the Installer.cmd file with the following content:

 if not exist "%ProgramFiles%\DebugDiag\" msiexec /i %~dp0DebugDiagx64.msi /qn

 

The installer script first checks whether the DebugDiag folder exists; if not, it installs DebugDiagx64.msi in silent mode.

Note: If you uninstall DebugDiag manually, the DebugDiag folder will still exist inside the Program Files folder, so the installer will not reinstall DebugDiag since the folder exists. The intent of this article is to have DebugDiag installed, with the imported rules running and activated again, in case of a VM reimage, new instances, etc.

       3.  Create the ImportDebugConfig.cmd file with the following content:

 "%ProgramFiles%\DebugDiag\DebugDiag.Collection.exe" /importConfig %~dp0DebugDiagRule.ddconfig –DoNotPrompt

 

The ImportDebugConfig script will import the configuration file. After that, the rule will be created and activated.

In case DebugDiag already has a rule with the same name, this command will overwrite it.

 

NOTE:

Use a simple text editor like Notepad to create these files. If you use Visual Studio to create a text file and then rename it to '.cmd', the file may still contain a UTF-8 byte order mark, and running the first line of the script will result in an error. If you do use Visual Studio to create the file, add a REM (remark) to the first line of the file so that it is ignored when run.

 

       4.  Add the Installer.cmd and ImportDebugConfig.cmd files to the roles by right-clicking the Startup folder inside the role and selecting Add > Existing Item. The roles should now have the files DebugDiagRule.ddconfig, DebugDiagx64.msi, Installer.cmd and ImportDebugConfig.cmd:

 

Deploying your service

When you deploy your service, the startup tasks will run, install DebugDiag, and import the config file containing the rule. The installation and configuration of DebugDiag is fast, so after the instance is ready you can RDP to the instance and verify that DebugDiag is installed and your rule is activated.

Cloud Services roles recycling with the error “System.IO.FileLoadException: Could not load file or assembly”


You may be facing an issue where, after a deployment, your Cloud Service role gets stuck in the "starting" or "recycling" state. In this case, as the initial troubleshooting step, we have to remote into the instance, check the logs, and try to find evidence of what is causing the issue. For guidance about which logs to look at, please refer to this excellent article by Kevin Williamson.
If you get into the situation above, one of the common causes is the following exception:

 
Event ID: 1044
Source: WaIISHost
Role entrypoint could not be created: System.TypeLoadException: Unable to load the role entry point due to the following exceptions:
– System.IO.FileLoadException: Could not load file or assembly 'System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
File name: 'System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
 
WRN: Assembly binding logging is turned OFF.
To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
Note: There is some performance penalty associated with assembly bind failure logging.
To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].
 
—> System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
   at System.Reflection.RuntimeModule.GetTypes(RuntimeModule module)
   at System.Reflection.RuntimeModule.GetTypes()
   at System.Reflection.Assembly.GetTypes()
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.GetRoleEntryPoint(Assembly entryPointAssembly)
   — End of inner exception stack trace —
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.GetRoleEntryPoint(Assembly entryPointAssembly)
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.CreateRoleEntryPoint(RoleType roleTypeEnum)
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.InitializeRoleInternal(RoleType roleTypeEnum)
 
the message resource is present but the message is not found in the string/message table

 

This exception is usually recorded in the Azure event log (see the next image), and it means that something in your project is referencing a wrong version of the assembly. In this case, it is failing to load version 3.0.0.0 of System.Web.Mvc, which is not the current version of the assembly (the current one here is 5.0.0.0), and that is where the exception happens.

 

 

The best way to fix this issue is to fix the wrong references inside your project. However, that can take some time if you are not sure exactly what is making the wrong references. In that case, the faster way is to use a bindingRedirect in the configuration files.
Usually, when a new assembly is added to your project, Visual Studio will automatically create a bindingRedirect entry in your web.config (Web Role) or app.config (Worker Role) just to avoid the wrong-assembly-version issue.

 

 

However, in Azure Cloud Services, the assembly bindings from web.config and app.config have no effect, because WaIISHost (Web Role) and WaWorkerHost (Worker Role) are not able to read these two configuration files; instead, they read the <role name>.dll.config file, and this is the file where the assembly binding configuration needs to be. Please refer to this article for more details.
The problem is that the <role name>.dll.config file is not added to the Solution by default, and even if it is there, it may not have the assembly binding configuration that web.config or app.config has.
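If you want to confirm at runtime which configuration file the role host process actually reads, a minimal sketch (placed, for example, in the role entry point's OnStart) is to log the current AppDomain's configuration file path; for a Worker Role this typically prints the <role name>.dll.config path rather than app.config:

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Logs the configuration file the host process (WaWorkerHost/WaIISHost) loaded
        // for this AppDomain; this is where bindingRedirects must live to take effect.
        Trace.TraceInformation("Host configuration file: {0}",
            AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);

        return base.OnStart();
    }
}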

 

Solution:

1) Open the <role name>.dll.config located in your project bin folder.
2) Check if the bindingRedirect entry that you need is there. If not, follow one of the two options below:
     a) Copy the web.config or app.config content (considering one of these two configuration files has the information that you need) and paste it into the <role name>.dll.config file.
     b) Manually create an assembly binding entry:
 

<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="Newtonsoft.Json" culture="neutral" publicKeyToken="30ad4fe6b2a6aeed" />
<bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" />
</dependentAssembly>
</assemblyBinding>
</runtime>

NOTE: In order to discover the publicKeyToken, execute the following PowerShell command:

PS C:\Windows\System32>([system.reflection.assembly]::loadfile("dll full path")).FullName

Where "dll full path" is the full path to the dll. Example:

PS C:\WINDOWS\system32> ([system.reflection.assembly]::loadfile("C:\logs\Newtonsoft.Json.dll")).FullName

You will have the following output:

Newtonsoft.Json, Version=6.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed

 

3) Add the <role name>.dll.config file to your Solution (same level as the web.config or app.config) and set the Copy to Output Directory property to “Copy Always”.

 

 

4) Redeploy to your Cloud Service.

 

NOTE: As a quick test, in case you are not able to make a new deployment, you can copy the <role name>.dll.config file to the bin folder of your package on the instance in Azure (<application drive>:\approot\bin) and wait a few minutes until the WaHostBootstrapper.exe process restarts the WaIISHost.exe or WaWorkerHost.exe process; the role will then initiate normally. However, do not forget to redeploy, since any manual change inside Cloud Services instances will be lost at some point.

 

Automatically flushing DNS in Azure PaaS Cloud Services Instances


I have worked on a case where, for a specific reason, it was necessary to flush DNS on the PaaS Cloud Service instances every 8 hours. This is completely possible; however, since we are talking about PaaS Cloud Services, and we already know we can't apply manual changes because PaaS instances are stateless, we will have to use startup tasks and the Windows Task Scheduler to get this done. Please see the following steps:

 

You will need to:

 1) Create a cmd file named "flushdns.cmd" with the flush DNS command (ipconfig /flushdns) and any others that you want or need (this cmd file will be used by Task Scheduler to flush DNS).

 2) Create another cmd file named "task-flushdns.cmd" containing a schtasks command that sets Task Scheduler to run the flushdns.cmd file every 8 hours.

Command:

Schtasks /create /tn FlushDNS /tr E:\approot\Startup\flushdns.cmd /sc hourly /mo 8 /ru System

 

            Command Details:

            a) This is the part of the command where the flushdns.cmd file is called: "E:\approot\Startup\flushdns.cmd"

            b) The command is set to be executed with the System account via "/ru System"

            c) After following step 3, "E:\approot\Startup\" is the location where flushdns.cmd will be

            d) More details about setting up Task Scheduler from the command line are available here.

 

 3) For a Role (Web or Worker Role)

    1. In Solution Explorer, under Roles in the cloud service project, right-click your role and select Add > New Folder. Create a folder named Startup.
    2. Right-click the Startup folder and select Add > Existing Item. Select the flushdns.cmd and task-flushdns.cmd files and add them to the Startup folder.

 

 

 4) Create a startup task in the ServiceDefinition.csdef: now you will have to create the startup task itself. For this, add the following to the ServiceDefinition.csdef file under the WebRole or WorkerRole node. For more information on startup tasks see: Run Startup Tasks in Azure.

 

 

 <Startup>

<Task commandLine="task-flushdns.cmd" executionContext="elevated" taskType="simple" />

</Startup>

 

Note: The above configuration will run the task-flushdns.cmd file, which configures Task Scheduler to run the flushdns.cmd file every 8 hours.

 

 5) Redeploy

 

Sources:

https://technet.microsoft.com/en-us/library/cc781949(v=ws.10).aspx

https://technet.microsoft.com/en-us/library/cc772785(v=ws.10).aspx

https://msdn.microsoft.com/library/azure/hh180155.aspx

 

 

 

 

Webhooks for Azure Alerts – Creating a sample ASP.NET receiver application


Microsoft Azure recently announced support for webhooks on Azure Alerts. Now you can provide an HTTPS endpoint to receive webhooks while creating an alert in the Azure portal.

Webhooks are user-defined HTTP endpoints that are usually triggered by an event. Webhooks allow us to get more out of Azure Alerts. You can specify an HTTP or HTTPS endpoint as a webhook while creating or updating an alert on the Azure Portal.

In this article I will walk you through creating a sample application to receive webhooks from Azure Alerts, configuring an alert to use this endpoint, and testing the overall flow.

Create a Receiver Application

Open Visual Studio 2015 and create a new ASP.NET Web Application.

 

[Figure 1]

Select the Empty template from the available ASP.NET 4.5 templates and check the option to add the Web API folders and core references, as below.

[Figure 2]

Add the Microsoft.AspNet.WebHooks.Receivers.Azure NuGet package. Don't forget to check "Include prerelease" if you can't find this package in the search results.

[Figure 3]

After installing the NuGet package, add the below line to the Register method in the WebApiConfig class.

config.InitializeReceiveAzureAlertWebHooks();

You can add the above code after the routing code as shown in Figure 4.

[Figure 4]

This code registers your webhooks receiver.

The next step is to add the below application setting to your web.config file. This setting adds the secret key used to validate that the WebHook requests are indeed from Azure Alerts. It is advisable to use a SHA256 hash or a similar value, which you can get from FreeFormatter Online Tools For Developers. This secret key will be part of the receiver URL provided in the Azure Portal while creating the Azure Alerts.

<appSettings>
<add key="MS_WebHookReceiverSecret_AzureAlert" value="d3a0f7968f7ded184194f848512c58c7f44cde25" />
</appSettings>
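If you prefer to generate the secret locally instead of using an online tool, here is a minimal sketch (any sufficiently long random hex string works as the receiver secret; the 64-character output below is comparable to the SHA256-style value above):

using System;
using System.Security.Cryptography;

class SecretGenerator
{
    static void Main()
    {
        // 32 random bytes -> 64 hex characters.
        byte[] bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        Console.WriteLine(BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant());
    }
}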

Next we need to add a handler to process the webhooks data sent by Azure Alerts.

Add a new class AzureAlertsWebHooksDataHandler and add the below code to it.

using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;

namespace MyWebhooksDemo1.App_Code
{
    public class AzureAlertsWebHooksDataHandler : WebHookHandler
    {
        public AzureAlertsWebHooksDataHandler()
        {
            Receiver = "azurealert";
        }

        public override Task ExecuteAsync(string generator, WebHookHandlerContext context)
        {
            // Convert to POCO type
            AzureAlertNotification notification = context.GetDataOrDefault<AzureAlertNotification>();

            // Get the notification name
            string name = notification.Context.Name + " -- "
                + notification.Context.Timestamp.ToFileTime().ToString();

            return Task.FromResult(true);
        }
    }
}

This is the most basic handler. In the constructor we initialize the Receiver property so it handles only Azure Alert webhooks. The ExecuteAsync method is the one responsible for processing the posted data and returning a response to indicate the webhook was received.

We will now expand this code to actually process the data received in the webhooks. Let’s store the data posted by the Azure Alerts webhooks sender in Azure table storage.

To do this, first add the WindowsAzure.Storage NuGet package, then add the below code to import the Azure storage namespaces required here.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Table;
using System.Configuration;   //To read connectionstring from the config files.

Also add your Azure storage connection string in the application settings as below.

  <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=your-account-name;AccountKey=your-account-key" />

And add a small TableEntity implementation, as below, to store data in Azure Table storage.

public class WHEntity : TableEntity
{
    public WHEntity(string Receiver, string Name)
    {
        this.PartitionKey = Receiver;
        this.RowKey = Name;
    }

    public WHEntity() { }

    public string FullData { get; set; }
}

Finally, let's modify the ExecuteAsync method to process the data sent by the webhooks sender and store it in Azure Table storage, as below.

public override Task ExecuteAsync(string generator, WebHookHandlerContext context)
{
    // Convert to POCO type
    AzureAlertNotification notification = context.GetDataOrDefault<AzureAlertNotification>();

    // Get the notification name
    string name = notification.Context.Name + " -- "
        + notification.Context.Timestamp.ToFileTime().ToString();

    WHEntity wHEntity1 = new WHEntity(this.Receiver, name);
    wHEntity1.FullData = context.Data.ToString();

    // Retrieve the storage account from the connection string.
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["StorageConnectionString"]);

    // Create the table client.
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

    CloudTable table = tableClient.GetTableReference("azurealertdemo");
    table.CreateIfNotExists();

    TableOperation insertOperation = TableOperation.InsertOrReplace(wHEntity1);
    table.Execute(insertOperation);

    return Task.FromResult(true);
}

The data sent by the webhooks sender is stored in JSON format in the Data field of the WebHookHandlerContext object, which is passed in as a parameter to the ExecuteAsync method. In the above method, I'm converting it to a string and storing it in Azure Table storage.

Now publish this code to an Azure Website. After publishing, you can use the below URL to configure Azure Alerts to send webhooks to the receiver we created above.

https://<host>/api/webhooks/incoming/azurealert?code=d3a0f7968f7ded184194f848512c58c7f44cde25

Note:
The code in the above URL is the same as the secret key we configured in the application settings.

Configure webhooks for Azure Alerts

Now log in to the new Azure portal to configure an Azure alert to send webhooks to the receiver we created above.

Browse and select a resource for which you want to configure the alerts. For simplicity, let's create an alert for the webhooks receiver Azure website we created above.

Create a new alert (webhooks are currently supported on metric alerts only), and provide your webhooks receiver URL in the Webhooks field, as below.

[Figure 5]

Verify the Results:

Configure the alert so it helps you verify the results quickly. You can accomplish this by keeping the Threshold and the Period at the minimum. I have set the Period to 5 minutes in the above example. Hence, after 5 minutes, if the threshold is reached, an alert is fired and a webhook is posted to our receiver URL. The data is then processed and stored in Azure Table storage, as below.

[Figure 6]

A sample JSON object posted by the Azure Alerts webhooks is shown below.

{
  "status": "Resolved",
  "context": {
    "id": "/subscriptions/<your-subscriptionId>/resourceGroups/webhooksdemo1/providers/microsoft.insights/alertrules/webhooksdemo",
    "name": "webhooksdemo",
    "description": "webhooksdemo",
    "conditionType": "Metric",
    "condition": {
      "metricName": "Requests",
      "metricUnit": "Count",
      "metricValue": "1",
      "threshold": "1",
      "windowSize": "5",
      "timeAggregation": "Total",
      "operator": "GreaterThan"
    },
    "subscriptionId": "<your-subscriptionId>",
    "resourceGroupName": "webhooksdemo1",
    "timestamp": "2015-10-14T09:43:20.264882Z",
    "resourceName": "mywebhooksdemo1",
    "resourceType": "microsoft.web/sites",
    "resourceId": "/subscriptions/<your-subscriptionId>/resourceGroups/webhooksdemo1/providers/Microsoft.Web/sites/MyWebhooksDemo1",
    "resourceRegion": "East US",
    "portalLink": "https://portal.azure.com/#resource/subscriptions/<your-subscriptionId>/resourceGroups/webhooksdemo1/providers/Microsoft.Web/sites/MyWebhooksDemo1"
  },
  "properties": {}
}

Alternatively, you can also use the Fiddler request composer to post to your webhooks receiver URL and check the response and the corresponding updates in Azure Table storage. Make sure that the content type is marked as JSON and the request body has JSON similar to the above example. A Fiddler request should look like the below example.

[Figure 7]
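If you prefer code over Fiddler, here is a minimal console sketch (the host name is hypothetical; the code query-string value must match the secret configured earlier) that posts a trimmed-down test payload with HttpClient:

using System;
using System.Net.Http;
using System.Text;

class WebhookTester
{
    static void Main()
    {
        // Hypothetical receiver URL; replace the host and code value with your own.
        const string url = "https://yourapp.azurewebsites.net/api/webhooks/incoming/azurealert"
            + "?code=d3a0f7968f7ded184194f848512c58c7f44cde25";

        // A trimmed-down body shaped like the sample JSON above.
        const string json = "{ \"status\": \"Activated\", \"context\": { \"name\": \"webhooksdemo\", "
            + "\"timestamp\": \"2015-10-14T09:43:20Z\" }, \"properties\": {} }";

        using (var client = new HttpClient())
        {
            var response = client.PostAsync(url,
                new StringContent(json, Encoding.UTF8, "application/json")).Result;
            Console.WriteLine(response.StatusCode);
        }
    }
}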

Note:
Webhooks are internally configured to retry a few times until they receive a successful response from the receiver within a short duration. Hence, you might see multiple requests hitting the endpoint in the ExecuteAsync method if you are debugging it remotely.

References:

Receive WebHooks from Azure Alerts and Kudu (Azure Web App Deployment) by Henrik F Nielsen
http://blogs.msdn.com/b/webdev/archive/2015/10/04/receive-webhooks-from-azure-alerts-and-kudu-azure-web-app-deployment.aspx

Introducing Microsoft ASP.NET WebHooks Preview by Henrik F Nielsen
http://blogs.msdn.com/b/webdev/archive/2015/09/04/introducing-microsoft-asp-net-webhooks-preview.aspx

Webhooks for Azure Alerts
https://azure.microsoft.com/en-us/blog/webhooks-for-azure-alerts/

How to configure webhooks for alerts
https://azure.microsoft.com/en-us/documentation/articles/insights-webhooks-alerts/

Error "Access to the path 'E:\sitesroot\0\Web.config' is denied" when storing Azure AD's public key in the Web.config of an Azure Cloud Services application


I have worked on a scenario where a Web Role application which had been working fine for a long time suddenly started throwing the error "Access to the path 'E:\sitesroot\0\Web.config' is denied", without any change or update to the deployment:

  


 
  

Looking at the error, it's fairly clear that, for some reason, the Application Pool identity is missing some specific access to the web.config file. But if we didn't make any change to the deployment, a few questions come into play:

  1. What is the default Application Pool identity account for a Web Role?
  2. What access permission does this account need now?
  3. Why just now?

These are very good questions, and I will answer them one by one:

1)    For Azure Cloud Services Web Roles, the default Application Pool Identity account is “Network Service”

  
  
  
  

2)    On a normal basis, the Application Pool account needs read permission on the web.config file so it can read the application configuration. However, looking into the security info for this config file inside the instance, we can see that Network Service already has read access.

 

  
So, what else does this account need? In this specific case, after analyzing the web.config content, we found a block which looks like the following:

 <issuerNameRegistry type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry,System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">

  <authority name="https://sts.windows.net/ec4187af-07da-4f01-b18f-64c2f5abecea/">

    <keys>

      <add thumbprint="3A38FA984E8560F19AADC9F86FE9594BB6AD049B" />

    </keys>

Note: The above block was taken from the article Important Information About Signing Key Rollover in Azure AD.

 

This means the application has code that writes updated Azure AD keys into the web.config file, and this operation requires the NETWORK SERVICE account to have WRITE permission on the web.config file. If you are not familiar with Azure AD's public keys, please see Overview of Signing Keys in Azure AD.

Note: It is recommended that your application cache these keys in a database or a configuration file to increase the efficiency of communicating with Azure AD during the sign-in process and to quickly validate a token using a different key.

Now that we know what is causing it, we can manually go to the Security tab in the web.config properties and give write permission to the NETWORK SERVICE account, and the application will start working again.

 

 

3)    Answering the last question: according to this article, the code updates the web.config only when there is a change to the certificates. This was probably the first time the code executed trying to update the web.config file and ran into the issue.

“Once you have followed these steps, your application’s Web.config will be updated with the latest information from the federation metadata document, including the latest keys. This update will occur every time your application pool recycles in IIS; by default, IIS is set to recycle applications every 29 hours. For more context about this update, see Adding Sign-On to Your Web Application Using Azure AD.”

This explains why the application ran into the issue only now, and we also know a workaround for it. However, we must not forget that we are working with Azure PaaS Cloud Services, which are "stateless", meaning this manual change will disappear at some point. So what do we do? In this case, the best thing to do is to create a startup task that gives the NETWORK SERVICE account write permission on the application's web.config file. Please follow the steps below to get this done.

 

Creating a Startup Task to give write permission to Network Service in the application Web.config file

  

1)    We first need the right command line to get the above task done, and here it is:

 

icacls E:\sitesroot\0\Web.config /grant "NT AUTHORITY\NetworkService":(w)

 

Note: You can also test the above command inside the instance to make sure it's working. For more context on the "icacls" command, review here.

 

2)    Create a cmd file named "manageacl.cmd" with the command from step 1 as its content (you can name it whatever you want; you will use this file name in the next step).

 

3)    Right-click your application project in Visual Studio, choose "Add Existing Item…", and add the manageacl.cmd file created in the previous step.

 

 

Note: Set the "Copy to Output Directory" property of the cmd file to "Copy Always"; otherwise the file will not be copied into the package when you publish it.

 

 

4)    Add the following to the ServiceDefinition.csdef file under the WebRole:

 

<Startup>

      <Task commandLine="manageacl.cmd" executionContext="elevated" taskType="background" />

</Startup>

 

Note: We are using taskType "background" because we need the role to be deployed in order to have the web.config file in the E:\sitesroot\0\ directory. If we used taskType="simple", the role would not start until this command finished running.

 

5)     Publish

 

After the steps above, you can RDP to your instance and check the security properties of the web.config file; you will see that NETWORK SERVICE now has write permission.


Azure Cloud Service package gets automatically deleted after Azure Account gets suspended/disabled


You may get into a situation where you have some kind of issue with your Azure subscription (e.g. you have reached the spending limit, there are issues with the credit card, etc.) and your account gets suspended/disabled. Right after the issue gets fixed, you notice your PaaS Cloud Services are all empty, without any deployments, and the packages are gone. You may also notice your IaaS VMs are stopped, but you are able to simply start them again within seconds. So the big questions are: why are my deployment packages gone, and how do I get them back to my Cloud Service? See the following points for the explanation:

 

Cloud Services packages are gone:

When Azure accounts are disabled, by default, all the PaaS Cloud Services deployment packages are also deleted, for a few reasons:

  • PaaS VMs are stateless and it's not possible to de-allocate them as we do with IaaS VMs, so shutting them down would not prevent the customer from being billed for compute hours. Deleting the deployment packages prevents disabled accounts from accruing additional compute charges.
  • Only the package (.cspkg) and its configuration file (.cscfg) are uploaded to Azure, and those source files would still be with the developers.

 

What would be the Solution?

Given that the package was uploaded to Azure in the deployment phase, the short-term solution is to upload the packages to the Cloud Services again (.cspkg and .cscfg). This way, Azure will recreate the deployment the same way it was before, and the applications will be up again within minutes.

 

What if for some reason I don’t have the packages anymore?

For every deployment made to Cloud Services, Azure stores the related package files (.cspkg and .cscfg) in an internal Azure Storage account (which only Azure has access to) for a few days. Given that, we have an internal process for retrieving the packages and sending them to the customer's storage account. For this, you have to open a ticket with the Azure Technical Support team and provide information identifying the deleted deployment, most importantly the Deployment ID:

 

Note: The Deployment ID is the most important piece of information in this process, since it is the only way we can identify the related package files internally. The process can't be completed without it.

 

In case you don't know where to find the Deployment ID for a deleted deployment, here are some tips for finding it:

  • Check the Operations History for your subscription in the Azure Portal and look for an operation made on the Cloud Service you want (staging or production slot), then get the Deployment ID from the operation details. For this, log in to the Azure Portal (manage.windowsazure.com) and go to "Management Services | Operation Logs".

[Image: img1]

[Image: img2]

  • In Visual Studio's Server Explorer, under storage accounts, you may find storage accounts associated with your cloud services, and their tables may include some log tables that contain the Deployment IDs.
  • Ask the engineer who owns the case to look for any Operation ID from the Cloud Service and get the Deployment ID from Azure internal logs.

 

Note: Azure only keeps the Operation History for 90 days, so any operation before that won't be found, and if there's no operation in this time range for the Cloud Service, we won't be able to find out the Deployment ID.

 

Understanding CPU metric data from Azure Cloud Services.

$
0
0

In this article we will learn how to interpret the CPU metric in both the Azure Portal and the Windows Azure Diagnostics (WAD) tables, and understand the differences between the data in the WAD tables and in the Azure Portal. We focus on CPU as an example, but the same information applies to other metrics as well.

Also, we start from a point where we assume you have already gone through How to Monitor Cloud Services and followed its steps.

 

Note: CPU usage, as well as Data In, Data Out, Disk Read Throughput, and Disk Write Throughput, are all captured by default, even without enabling Azure Diagnostics (WAD).

 

Let's take a look at the following image, which shows the Azure Portal Dashboard in the Monitor tab for a Cloud Service instance called "WebRole1_IN_0"; the time zone for the portal screenshot is UTC-3.

 

 

[Image: CPU-image1]

 

If we check this dashboard and put the mouse pointer over 11:45am (14:45 UTC), we can see CPU Percentage [Avg] = 2.13%, and over 12:00pm (15:00 UTC) we can see CPU Percentage [Avg] = 0.6%:

[Image: CPU-image2]

[Image: CPU-image3]

 

If we go to the storage account that is set in the WAD configuration and check the table "WAD[DeploymentID]PT5MRITable" (this table has performance counter data with 5-minute aggregation), we see different values for total, minimum and maximum for the same counter (screenshots from the same timestamps as the two images above, respectively):

 

Note: In order to have performance counter data stored in WAD tables inside your storage account, you must have Diagnostics (WAD) and Verbose monitoring enabled for your role; otherwise you will only have the minimal metrics (CPU Percentage, Data In, Data Out, Disk Read Throughput, and Disk Write Throughput), available on the Azure Portal Dashboard only. See how to Configure monitoring for cloud services.

[Image: CPU-image4]

[Image: CPU-image5]

 

 

Note: Timestamps in WAD tables refer to data between that timestamp and the previous one, while in the Portal Dashboard they refer to data between the timestamp and the one after it.

 

So, what is this data about, and why is it different? Let's analyze the second timestamp mentioned, which is between 12:00pm – 12:05pm UTC-3 (15:00 – 15:05 UTC).

 

Analysis:

 

The metric was sampled twice in this time range of around 5 minutes: the lower of the two collections (minimum) was 0.11647% usage, and the higher (maximum) was 0.116721% usage.

[Image: CPU-image6]

 

However, in the portal the data is the same; we just see a different presentation of it.

[Image: CPU-image7]

 

When we put the mouse pointer on any of the graph points, we can see the "percentage usage average" for the next 5 minutes, which means that what we see in the dashboard is a calculation over the performance counter data from the role instances in that specific time range. In this case, the CPU had a usage average of 0.6% from 5/5/2016 12:00 PM – 12:05 PM UTC-3 (15:00 – 15:05 UTC). See more details in How to Monitor Cloud Services:

 

By default performance counter data from role instances is sampled and transferred from the role instance at 3-minute intervals. When you enable verbose monitoring, the raw performance counter data is aggregated for each role instance and across role instances for each role at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is purged after 10 days.
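To read these aggregated rows programmatically rather than through a table viewer, here is a minimal sketch (assuming the WindowsAzure.Storage package; the table name and the column names such as CounterName, Average, Minimum and Maximum are the ones visible in the screenshots above):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class WadQueryDemo
{
    static void Main()
    {
        // Hypothetical values: replace with your WAD storage connection string and deployment ID.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=yourwadaccount;AccountKey=yourkey==");
        var table = account.CreateCloudTableClient()
                           .GetTableReference("WAD" + "yourdeploymentid" + "PT5MRITable");

        // Filter on the CPU counter; Average/Minimum/Maximum are the aggregated
        // values compared against the portal in this article.
        var query = new TableQuery<DynamicTableEntity>().Where(
            TableQuery.GenerateFilterCondition("CounterName", QueryComparisons.Equal,
                @"\Processor(_Total)\% Processor Time"));

        foreach (DynamicTableEntity row in table.ExecuteQuery(query))
        {
            Console.WriteLine("{0:u}  avg={1}  min={2}  max={3}",
                row.Timestamp,
                row.Properties["Average"].DoubleValue,
                row.Properties["Minimum"].DoubleValue,
                row.Properties["Maximum"].DoubleValue);
        }
    }
}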

 

For the bottom part of the Monitor tab, we have the following data for counters:

 

  • Name: Name of the metric.
  • Source: Where the metric is being taken from.
  • Min: The minimum usage average percentage (the lowest value) for the whole dashboard period being presented, in this case "1 Hour".
  • Max: The maximum usage average percentage (the highest value) for the whole dashboard period being presented, in this case "1 Hour".
  • AVG: The average usage percentage for the whole dashboard period being presented, in this case "1 Hour".
  • TOTAL: The total value for the whole dashboard period being presented (available for some metrics only), in this case "1 Hour".
  • Alert: Whether you have any alert created for the specific metric.

[Image: CPU-image8]

 

Conclusion: We are able to see the CPU metric, as in the examples above, in the Azure Dashboard as well as in the WAD tables in the storage account (if monitoring is set to "Verbose"). However, the metric data in the WAD tables consists of snapshots of the performance counter data from the role, aggregated at intervals of 5 minutes, 1 hour, and 12 hours; the data in the Azure Portal Dashboard, on the other hand, is the same data calculated and presented as an average. So both come from the same place, but they are presented in different ways.

 

Source:

https://azure.microsoft.com/en-us/documentation/articles/cloud-services-how-to-monitor/

Azure Emulator Crash with error 0x800700b7: Cannot create a file when that file already exists


Sometimes when you are using Visual Studio and working on some Azure projects, you might hit an issue which causes your Azure emulator to crash.

When that happens, you will get the exception System.Runtime.InteropServices.COMException (0x800700B7): Cannot create a file when that file already exists. (Exception from HRESULT: 0x800700B7)

[Image: Error]

You can also try to run the compute emulator manually instead of using Visual Studio; the command you need to run is like the one below:

csrun /devfabric /usefullemulator

And from the command window, you can see that the compute emulator is started.

[Image: csrun1]

However, using the same csrun tool, you can check the current status of the emulator by running csrun /status, and you will see that the emulator is not running.

[Image: csrun2]

You can check the DFService.log file that is generated by the emulator; those logs are located in the following folder path:

C:\Users\<user>\AppData\Local\dftmp\DFServiceLogs

In the DFService log file, you can see the same exception that is reported by Visual Studio (while trying to run the emulator) or by running the csrun command (to run the emulator manually):

 

DFService Information: 0 : [00003520:00000001, 2016/02/17 23:36:54.436]==============================================================================================================================

DFService Information: 0 : [00003520:00000001, 2016/04/17 23:36:54.436] Started: “C:\Program Files\Microsoft SDKs\Azure\Emulator\devfabric\DFService.exe” -sp “C:\Users\YYY\AppData\Local\dftmp” -enableIIS -singleInstance -elevated

DFService Information: 0 : [00003520:00000001, 2016/04/17 23:36:54.482] Exception:System.Runtime.InteropServices.COMException (0x800700B7): Cannot create a file when that file already exists. (Exception from HRESULT: 0x800700B7)

at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)

at System.Runtime.InteropServices.Marshal.ThrowExceptionForHR(Int32 errorCode, IntPtr errorInfo)

at Microsoft.WindowsAzure.GuestAgent.EmulatorRuntime.EmulatorRuntimeImpl.Initialize(String runtimeConfigIniFile, String serviceName, String rootPath, String logFilePath)

at Microsoft.ServiceHosting.Tools.DevelopmentFabric.Fabricator.InitializeEmulatorRuntime()

at Microsoft.ServiceHosting.Tools.DevelopmentFabric.Fabricator.InitializeRuntimeAgents()

at Microsoft.ServiceHosting.Tools.DevelopmentFabric.Fabricator.Initialize()

at Microsoft.ServiceHosting.Tools.DevelopmentFabric.Program.Main(String[] args)

 

Now that the issue has been identified, how do we mitigate it?

First, I suggest you check this great support blog:

https://blogs.technet.microsoft.com/supportingwindows/2014/08/11/wmi-missing-or-failing-wmi-providers-or-invalid-wmi-class

Then, after you have checked the blog post, you need to identify the missing or failing WMI class by following these steps:

  1. Go to Start > Run and type wmimgmt.msc.
  2. Right-click on WMI Control (Local) and select Properties.
  3. On the General tab, if there are any failures noted in that box, that indicates a core WMI issue.
  4. Find the .MOF files for the failing namespace/class (Win32_Processor in this case).

In this case, I saw that there were some invalid WMI classes:

  • Win32_Processor
  • Win32_WMISetting

[Image: wmierror]

  5. Repair the MOF file by running mofcomp.exe <MOFFilename.MOF>. mofcomp.exe is located in the C:\Windows\System32\wbem folder.
  6. Then re-register the associated DLL by running regsvr32 <MOFFilename.dll>.

[Image: fixIssue]

  7. Verify whether it is fixed by checking the WMI Control (wmimgmt.msc) again. This time, as you can see in the image below, there are no more WMI class errors.

[Image: wmifixed]

  8. Then re-launch the emulator; this time you will see the emulator run again, with no issues or crashes.

I want to thank Wayne for his great and deep knowledge of Visual Studio.

You can now keep enjoying Azure!

 

Using blob snapshots with PowerShell


Some time ago, a customer asked me how to create a blob snapshot of an Azure VM. The process for creating a blob snapshot was clear to the customer, and you can read the next blog as a reference:

https://azure.microsoft.com/en-us/documentation/articles/storage-powershell-guide-full/#how-to-manage-azure-blob-snapshots

As you read in the article, you can create, list, copy, and delete a blob snapshot, but how can you get a particular snapshot that was taken earlier, and not just the one you created last?

First, please read the next reference blog to keep in mind some considerations about blob snapshots.

https://azure.microsoft.com/en-us/documentation/articles/storage-blob-snapshots

Now, let’s go to the code.

NOTE: Check that you have Azure PowerShell already installed; details here.

Log in to your Azure subscription.

=============================================

  1. Open a PowerShell session with admin rights.
  2. Run the command: Login-AzureRmAccount
  3. Provide the credentials that you usually use to log in to the Azure Portal.

 

To create a snapshot.

=============================================

  1. Define the storage account and the context.

$StorageAccountName = "yourstorageaccount"
$StorageAccountKey = "Storage key for yourstorageaccount ends with =="
$ContainerName = "yourcontainername"
$BlobName = "yourblobname"

  2. Create the context.

$Ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey

  3. Get a reference to a blob.

$blob = Get-AzureStorageBlob -Context $Ctx -Container $ContainerName -Blob $BlobName

  4. Create a blob snapshot.

$snap = $blob.ICloudBlob.CreateSnapshot()

 

With this procedure you can create a snapshot at any time for the $BlobName defined.

Consider that the name will be the same as the original blob, but at the time the blob snapshot is created, a SnapshotTime property records the precise date/time when the blob snapshot was taken. We will use the SnapshotTime property later to determine and select the blob snapshot that you want to copy and/or promote.
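The same operations can also be done directly from .NET with the storage client library that these cmdlets wrap; here is a minimal C# sketch (hypothetical account, container and blob names) for creating a snapshot and listing all snapshots of a blob:

using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SnapshotDemo
{
    static void Main()
    {
        // Hypothetical values; replace with your own account, container, and blob.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=yourstorageaccount;AccountKey=yourkey==");
        var container = account.CreateCloudBlobClient().GetContainerReference("yourcontainername");
        CloudPageBlob blob = container.GetPageBlobReference("yourblobname.vhd");

        // Create a snapshot; the service stamps SnapshotTime at creation.
        CloudPageBlob snap = blob.CreateSnapshot();
        Console.WriteLine("Snapshot taken at {0}", snap.SnapshotTime);

        // List every snapshot of this blob (the same listing the next section does in PowerShell).
        var snapshots = container
            .ListBlobs("yourblobname.vhd", useFlatBlobListing: true,
                       blobListingDetails: BlobListingDetails.Snapshots)
            .OfType<CloudPageBlob>()
            .Where(b => b.IsSnapshot);

        foreach (var s in snapshots)
            Console.WriteLine(s.SnapshotTime);
    }
}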

 

Retrieve the blob snapshot list.

=============================================

  1. List the blob snapshots.

$ListBlob = Get-AzureStorageBlob -Context $Ctx -Prefix $BlobName -Container $ContainerName | Where-Object {$_.ICloudBlob.IsSnapshot -and $_.Name -eq $BlobName -and $_.SnapshotTime -ne $null }

 

The above command will return all the blob snapshots associated with $BlobName.

If we check the $ListBlob variable, it will show us the list of blob snapshots, and in one of the columns we can see the SnapshotTime column displaying the time each snapshot was taken (in the UTC time zone).

[Image: listSnapshots]

 

Copy a particular/specific blob snapshot.

=============================================

  1. Define the variables.

$DestContainerName = "yourdestcontainername"
$DestBlobName = "CopyBlobName"

  2. Create a table listing the blob snapshots you have (this will let you select the item and the SnapshotTime property from this table, so later we can get the snapshot based on the SnapshotTime property).

$ListBlob | Format-Table -AutoSize

In the image below, you can see all the blob snapshots have the name "juliocotest2016123225918.vhd", and the way you can identify when each blob snapshot was taken is by checking the SnapshotTime property.

[Image: listSnapshots2]

  3. Using the table above, you can select the blob snapshot you want to copy by using the SnapshotTime property. Let's say you want to copy the snapshot that was taken last, "2/25/2016 6:01:30 AM +00:00"; you would select the SnapshotTime property for item "3". (Note: "Item" is the index of the entry in the table whose SnapshotTime you want to select/copy.)

$SnapshotTime = $ListBlob[Item] | select -ExpandProperty SnapshotTime

  4. Now, select the blob snapshot you want to restore based on the SnapshotTime property we got from the previous step.

$RestorePoint = $ListBlob | where { $_.SnapshotTime -eq $SnapshotTime }

  5. Now, let's convert the selected blob to a CloudBlob type.

$snapshot = [Microsoft.WindowsAzure.Storage.Blob.CloudBlob] $RestorePoint[0].ICloudBlob

  6. Finally, you can now copy the blob snapshot to another container.

Start-AzureStorageBlobCopy -Context $Ctx -ICloudBlob $snapshot -DestBlob $DestBlobName -DestContainer $DestContainerName

As you can see in the image below, I was able to copy that particular snapshot into my “snapshots” container.

[Image: listSnapshot3]

 

Now, I used Azure Storage Explorer to check the blob.

Here is a screenshot showing the blobs I had in my "snapshots" container before copying the blob snapshot.

[Image: explorer1]

And in the next screenshot you can see the “recovery.vhd” page blob that was copied from the snapshot list.

[Image: explorer2]

Now, just to end this blog, I would like to show you how to delete a series of blob snapshots based on the time they were taken (yes, using the SnapshotTime property). To do that, you need to list the blob snapshots you have again.

 

Retrieve the snapshots list.

=============================================

  1. List the snapshots of a blob.

$blob = Get-AzureStorageContainer -Context $Ctx -Name $ContainerName
$ListOfBlobs = $blob.CloudBlobContainer.ListBlobs($BlobName, $true, "Snapshots")

The above command will return all the snapshots associated with $BlobName.

[Image: listSnapshot4]

 

Declare the min and max dates (for the snapshots to be removed).

=============================================

  1. Declare the minimum date, for example:

$minDate = [datetime]"01/23/2016 9:00 AM"

  2. Declare the maximum date, for example:

$maxDate = [datetime]"02/24/2016 9:00 PM"

This tells the script that you want to delete the blob snapshots taken from $minDate ("01/23/2016 9:00 AM") to $maxDate ("02/24/2016 9:00 PM").

 

Delete the snapshot of a blob.

=============================================

  1. For this, I did a foreach iteration to first validate that the blob is a snapshot, and then validate that it is in the range of dates defined in the previous step.

foreach ($CloudBlockBlob in $ListOfBlobs)
{
  if ($CloudBlockBlob.IsSnapshot)
  {
    if ($CloudBlockBlob.SnapshotTime -le $maxDate -and $CloudBlockBlob.SnapshotTime -ge $minDate)
    {
      $CloudBlockBlob.Delete()
    }
  }
}

Finally, you can list the snapshots again, and you will see that you have deleted the ones in the date range you defined.

[Image: listSnapshot5]

Happy coding!

Cloud Services PaaS – Common scenarios for SSL certificate configuration


This article is intended to summarize a few common scenarios for SSL certificate configuration on Cloud Services PaaS. It covers the configuration of multiple certificates for HTTPS communication and certificate installation for general encrypted-communication purposes.

 

  • In case you just want to install one certificate on your cloud service to enable HTTPS communication, go to this article:

https://azure.microsoft.com/en-us/documentation/articles/cloud-services-configure-ssl-certificate-portal/

 

  • In case you want only one certificate for multiple hostnames, for example contoso.com / contoso.us / contoso.com.br / *.contoso.com, you can use a Subject Alternative Name (SAN) certificate; go to this article:

https://blogs.msdn.microsoft.com/cie/2013/11/11/multiple-domain-name-to-the-same-cloud-service-and-ssl-certificates/

 

  • In case you have chained certificates to install on your cloud service, you should go to this article:

https://blogs.msdn.microsoft.com/azuredevsupport/2010/02/24/how-to-install-a-chained-ssl-certificate/

 

  • In case you have multiple certificates and just need to install all of them on your cloud service to be used by the application (not for website bindings on port 443), you should do two things:
    1. First, upload the certificates to the portal (steps 3 and 4 in this article): https://azure.microsoft.com/en-us/documentation/articles/cloud-services-configure-ssl-certificate-portal/
    2. Configure the certificates in the "Properties" of your roles (figure 1) and add as many certificates as you need in the "Add Certificate" dialog (figure 2).


Figure 1. WebRole1 properties

 


Figure 2. Add certificates

 

After you deploy this project, those certificates will be installed only on WebRole1, and they will also be shown in IIS Manager.

 


Figure 3. Certificate Store / IIS certificates

 

 

  • In case you have multiple certificates with different hostnames and need to add them to the website bindings on port 443 on all your WebRoles for HTTPS communication: beyond the previous item, which installs the certificates on the server, you should also add the site HTTPS bindings programmatically, via application code on start or via a startup task. In both implementations you have to check in code that IIS is ready before changing its configuration, and you must enable Server Name Indication (SNI) on all binding entries (figure 4), because a given server can only provide different certificates over the same IP:Port binding combination if it is configured to use SNI. The articles below can be used as reference for adding the bindings programmatically; a minimal sketch follows the links.

https://www.iis.net/configreference/system.applicationhost/sites/sitedefaults/bindings/binding

https://blogs.msdn.microsoft.com/jianwu/2014/12/17/expose-ssl-service-to-multi-domains-from-the-same-cloud-service/
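For illustration, here is a minimal C# sketch for adding such a binding from code (the site, host name and thumbprint values are hypothetical; it assumes the certificate was already installed through the portal upload described above, and that IIS is ready):

using System.Linq;
using System.Security.Cryptography.X509Certificates;
using Microsoft.Web.Administration; // %WinDir%\System32\inetsrv\Microsoft.Web.Administration.dll

class SniBindingSetup
{
    // Hypothetical values; adjust for your deployment.
    const string HostName = "contoso.com";
    const string Thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

    static void Main()
    {
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2 cert = store.Certificates
            .Find(X509FindType.FindByThumbprint, Thumbprint, false)[0];

        using (var serverManager = new ServerManager())
        {
            Site site = serverManager.Sites.First();

            // Binding format is IP:Port:HostHeader; sslFlags = 1 enables SNI on IIS 8+.
            Binding binding = site.Bindings.Add("*:443:" + HostName, cert.GetCertHash(), store.Name);
            binding.SetAttributeValue("sslFlags", 1);

            serverManager.CommitChanges();
        }
        store.Close();
    }
}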

 


Figure 4. IIS HTTPS binding with SNI checked

 

The article below explains how the SSL handshake works when there is more than one binding for the same IP:Port combination using different certificates on IIS 8. There is a common configuration issue related to step 3 of the handshake, in which the server checks for a legacy binding, meaning a binding entry without the SNI option checked, and provides that certificate to clients. In this case IIS will provide the certificate from the binding where the SNI option is not checked for all HTTPS bindings on the server, even if the other bindings have the SNI option checked.

https://blogs.msdn.microsoft.com/kaushal/2012/10/11/central-certificate-store-ccs-with-iis-8-windows-server-2012/#commentmessage


The steps below outline how the SSL handshake works with a CCS binding on the IIS 8 web server:

  1. The client and the server establish a TCP connection via TCP handshake.
  2. The client sends a Client Hello to the server. This packet contains the specific protocol version and the list of supported cipher suites, along with the hostname (let's say www.outlook.com, provided it's an SNI-compliant browser). The TCP/IP headers in the packet contain the IP address and the port number.
  3. The server checks the registry (legacy bindings) to find a certificate hash/thumbprint corresponding to the above combination of IP:Port.
  4. If there is no legacy binding for that IP:Port, then server uses the port number from the Client Hello to check the registry for a CCS binding for this port. The server checks the below key to find the binding information: HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslCcsBindingInfo
  5. If the above step fails i.e., if the server couldn’t find a corresponding CCS binding for that port, then it would fallback to the legacy binding. (If this is absent then the SSL handshake would fail).
  6. If Step 4 succeeds. The hostname (from Client Hello) is used to generate a filename like hostname.pfx. The filename is passed as a parameter along with the other details (CCS Configuration) to the crypto API’s which in turn call the File System API’s to retrieve the corresponding certificate from the Central Certificate Store (File Share). The retrieved certificate is cached and the corresponding certificate without private key is added to the Server Hello and sent to the client.
  7. If it cannot find a filename, then it falls back to Step 5.

 

 
