Document the endpoint URL requirement for the various client constructors #19274

Closed
cmackenzie1 opened this issue Oct 3, 2022 · 16 comments · Fixed by #19293

Comments

@cmackenzie1

Feature Request

Previously, before v0.5.0, one could call UploadStream without specifying the container name, since it was already present in the URL used to instantiate the client via azblob.NewClientWithNoCredential.

Now, with v0.5.0, you must specify the container name in the client.UploadStream args.

Is it possible to get a utility function that extracts the container name from the URL used when creating the client?
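
For illustration only, a minimal sketch of the kind of helper being asked for, assuming just the standard library (fmt, net/url, strings); this is not an SDK API, and azblob.ParseURL (discussed below) is the supported way to do this:

// Hypothetical helper: extract the container name from a container or blob
// URL by taking the first path segment.
func containerNameFromURL(rawURL string) (string, error) {
    u, err := url.Parse(rawURL)
    if err != nil {
        return "", err
    }
    segments := strings.SplitN(strings.TrimPrefix(u.Path, "/"), "/", 2)
    if segments[0] == "" {
        return "", fmt.Errorf("no container name in %q", rawURL)
    }
    return segments[0], nil
}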

@jhendrixMSFT
Member

jhendrixMSFT commented Oct 4, 2022

You can use azblob.ParseURL() to parse a blob URL into its constituent parts.

Would you mind telling us a bit more about your scenario? When we redesigned azblob, we envisioned the client in the root package taking just a blob storage service URL (I see that we don't properly document this). I'd like to understand the pattern you're using to see if we need to make some changes.

@cmackenzie1
Author

I am working on upgrading the SDK used in the integration at Cloudflare that pushes customer logs to Azure Blob Storage [1].

Customers go through the process of creating a storage account and a container, then generate a SAS token for us to use on their behalf. During the SAS token creation flow in the UI, the user is presented with a URL in what looks like the format <accountName>.blob.core.windows.net/<containerName>

e.g.,

https://mirio.blob.core.windows.net/cloudflare-logs?sp=w&st=2022-10-04T15:41:43Z&se=2023-10-04T23:41:43Z&spr=https&sv=2021-06-08&sr=c&sig=<REDACTED>

The customer then submits the entire SAS URL when creating a "Logpush" job. Before v0.5.0, we simply used the URL as provided to create a blob container client and then called client.UploadStream with the io.Reader, and it worked.

we envisioned the client in the root package to take just a blob storage service URL

Going back to this, my understanding is that both https://mirio.blob.core.windows.net/cloudflare-logs and https://mirio.blob.core.windows.net/ would be considered valid blob storage URLs, and the first one would limit the client's scope to just the container specified in the URL. Is that the case in the current implementation for more "qualified" URLs when constructing a client?

Footnotes

  1. https://developers.cloudflare.com/logs/get-started/enable-destinations/azure/

@jhendrixMSFT
Member

Thanks for sharing the details. Your understanding is correct.

In our example, the thinking is that the client and its operations are hierarchical. You'd create the client for the service URL and then manipulate containers/blobs within it.

You do bring up a good point though that if you start with a container/blob URL it's not very intuitive. I will discuss this with the team to see how we want to proceed.

In the meantime, you have a few options.

  • Pass only the service URL to the azblob client constructor, and use the container/blob parameters for the APIs.
  • If you're always starting with a container URL, you could use the client from the container package. You would then use one of the New*BlobClient() methods to create the blob client of choice (this is a bit more of a power-user scenario; see the sketch below).
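
A rough sketch of the second option, assuming the container subpackage's NewClientWithNoCredential constructor and the block blob client's UploadStream method (variable names are illustrative):

// containerSASURL is the full container URL including its SAS query string,
// e.g. https://<account>.blob.core.windows.net/<container>?<sas>.
containerClient, err := container.NewClientWithNoCredential(containerSASURL, nil)
if err != nil {
    return err
}

// Create a block blob client for a specific blob within that container,
// then stream the body up to it.
blockBlobClient := containerClient.NewBlockBlobClient(blobName)
if _, err := blockBlobClient.UploadStream(ctx, body, nil); err != nil {
    return err
}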

@cmackenzie1
Author

Thanks for the quick responses @jhendrixMSFT, very much appreciated.

I am OK with going the route of creating the client and getting the container name from the URL. It seems the most straightforward and in line with the other object storage clients we use, like S3 and GCS.

client, err := azblob.NewClientWithNoCredential(blobUrl, nil)
if err != nil {
    return nil, fmt.Errorf("unable to create azure blob client: %v", err)
}
// ParseURL exposes the container name already embedded in the URL.
parts, err := azblob.ParseURL(blobUrl)
if err != nil {
    return nil, fmt.Errorf("invalid url: %v", err)
}
containerName := parts.ContainerName

@jhendrixMSFT
Member

Just make sure that when you call azblob.NewClientWithNoCredential(), blobUrl is only the service URL and not the full blob URL. The constructor will accept it, but the APIs won't work as expected (we need to document and fix this).
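
For clarity, the distinction looks like this (account, container, and blob names are illustrative):

  • service URL the constructor expects: https://<account>.blob.core.windows.net/?<sas-query>
  • blob URL the APIs build for you from the containerName/blobName params: https://<account>.blob.core.windows.net/<container>/<blob>?<sas-query>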

@cmackenzie1
Author

Thanks. While testing, I ran into some interesting behavior, so let me know whether I should create a new issue for it.

When doing the following, I end up with an unnamed object under the prefix of key instead of key itself being the object name.

parts, err := azblob.ParseURL(blobUrl)
if err != nil {
    panic(fmt.Errorf("invalid url: %v", err))
}

client, err := azblob.NewClientWithNoCredential(parts.String(), nil)
if err != nil {
    panic(fmt.Errorf("unable to create azure blob client: %v", err))
}

// blob here refers to the storage/azblob/blob subpackage; containerName, key,
// body, contentEncoding, and contentType come from the surrounding code.
_, err = client.UploadStream(ctx, containerName, key, body, &azblob.UploadStreamOptions{
    BlockSize: 2 * 1024 * 1024, // 2 MB
    HTTPHeaders: &blob.HTTPHeaders{
        BlobContentEncoding: &contentEncoding,
        BlobContentType:     &contentType,
    },
})

[Screenshot: the resulting unnamed object nested under the key prefix]

@jhendrixMSFT
Member

This is what I was alluding to earlier.

When you call one of the Upload APIs, it appends the containerName and blobName params to the URL passed to the client constructor. This is why you must pass only the service URL to the client constructor (like in the example).
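
Putting the pieces together, a minimal sketch of that pattern, assuming the SAS query string stays on the service URL and that ctx, blobName, and body come from the surrounding code:

parts, err := azblob.ParseURL(containerSASURL)
if err != nil {
    return fmt.Errorf("invalid url: %v", err)
}

// Rebuild just the service URL (scheme + host + SAS query); the container
// is passed to UploadStream separately instead of living in the URL path.
u, err := url.Parse(containerSASURL) // net/url
if err != nil {
    return fmt.Errorf("invalid url: %v", err)
}
serviceURL := u.Scheme + "://" + u.Host + "/?" + u.RawQuery

client, err := azblob.NewClientWithNoCredential(serviceURL, nil)
if err != nil {
    return fmt.Errorf("unable to create azure blob client: %v", err)
}

_, err = client.UploadStream(ctx, parts.ContainerName, blobName, body, nil)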

@cmackenzie1
Author

cmackenzie1 commented Oct 4, 2022

I tried with just https://mirio.blob.core.windows.net/?se=2023-10-04T23%3A41%3A43Z&sig=REDACTED&sp=w&spr=https&sr=c&st=2022-10-04T15%3A41%3A43Z&sv=2021-06-08 as the serviceUrl, manually specifying the containerName and key, and I get the same result.

I was following this example:
https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/storage/azblob/examples_test.go#L51-L70

@cmackenzie1
Author

cmackenzie1 commented Oct 4, 2022

Found the underlying issue: it only occurs when the URL contains query parameters. A trailing / is appended, which creates the unnamed resource.

https://cs.github.com/Azure/azure-sdk-for-go/blob/51f810429dd6f9a5233d076e449b2b212b7d52bc/sdk/azcore/runtime/request.go?q=JoinPaths#L66-L68

In Go 1.19, a new function for joining URL paths was added: https://pkg.go.dev/net/[email protected]#JoinPath
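
For illustration, with made-up values, the new helper joins path segments without adding a stray trailing slash and keeps the query string intact:

joined, err := url.JoinPath("https://account.blob.core.windows.net/?sv=2021-06-08", "container", "blob")
// err == nil
// joined == "https://account.blob.core.windows.net/container/blob?sv=2021-06-08"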

@jhendrixMSFT
Member

#19245 is in progress to get this fixed.

@cmackenzie1
Author

Ah, thank you @jhendrixMSFT. I know you mentioned/alluded to it before, but I wasn't sure whether this was that same behavior. I'll keep an eye on #19245. I think it's safe to close this issue now.

@jhendrixMSFT
Member

I will keep this open to track fixing the docs so it's clear that the constructors take only the service URL (the context here is very helpful).

@jhendrixMSFT jhendrixMSFT changed the title from "azblob: add functions to parse container name from URL." to "Document the endpoint URL requirement for the various client constructors" on Oct 4, 2022
@jhendrixMSFT
Member

@cmackenzie1 can I ask where you got the SAS URL in your earlier comment? Looks like we also add a superfluous trailing / in the GetSASURL APIs, so I'm curious if this is how you obtained it (or maybe the bug is elsewhere).

@jhendrixMSFT
Member

Oh looks like the portal provides this.

@cmackenzie1
Author

Yup, the SAS URL was generated in the Azure UI: Storage accounts > Containers > Shared access token.
