Support for Block CIMs #2261

Open · wants to merge 4 commits into main
Conversation

@ambarve ambarve commented Sep 9, 2024

This PR adds support for creating, mounting, and using block CIMs for running Windows process-isolated containers.
Individual commits provide additional details on what changes they bring in.

@ambarve ambarve force-pushed the blocked_cim_import branch 3 times, most recently from ad3aed4 to 2afb976 Compare September 10, 2024 20:48
@ambarve ambarve changed the title Blocked cim import Support for Block CIMs Sep 10, 2024
@ambarve ambarve marked this pull request as ready for review September 11, 2024 18:17
@ambarve ambarve requested a review from a team as a code owner September 11, 2024 18:17
pkg/cimfs/cimfs.go (outdated)
//sys CimMountImage(imagePath string, fsName string, flags uint32, volumeID *g) (hr error) = cimfs.CimMountImage?
//sys CimDismountImage(volumeID *g) (hr error) = cimfs.CimDismountImage?

//sys CimCreateImage(imagePath string, oldFSName *uint16, newFSName *uint16, cimFSHandle *FsHandle) (hr error) = cimfs.CimCreateImage?
//sys CimCloseImage(cimFSHandle FsHandle) = cimfs.CimCloseImage?
//sys CimCreateImage2(imagePath string, flags uint32, oldFSName *uint16, newFSName *uint16, cimFSHandle *FsHandle) (hr error) = cimfs.CimCreateImage2?
//sys CimCloseImage(cimFSHandle FsHandle) = cimfs.CimCloseImage
Member:

Was removing the ? intentional?

Contributor (author):

Yes. The CimCloseImage API doesn't return anything. However, if you keep the ? for an API that returns void, mkwinsyscall still generates the function definition with a return value. Ideally we should fix this bug in mkwinsyscall, but I unblocked the CimFS work by removing that ?.

Member:

But the point of ? is that it returns an error instead of panicking if the DLL/function cannot be loaded. So isn't returning an error what we want?

Contributor (author):

I see what you mean. But then if we keep the ?, it generates the function definition with a return value and any caller inspecting that return value will see garbage data most of the time (except for cases when it actually isn't able to load that function from the DLL).
We could maybe write a wrapper for CimCloseImage that has a void return but panics in case the underlying function returns an error due to DLL load failure. I prefer removing the ? over the panic approach since the possibility of it not being able to load CimCloseImage after successfully loading other CimFS APIs is pretty low. Let me know what you think.

Member:

To me, it seems like we should decide how we want to treat a DLL import load failure: we can panic, or return an error. Then we should be consistent with that behavior (so either ? everywhere, or nowhere).

If we add the ? here, it won't be a "garbage" value, it will just be nil unless the import load failed.

Personally, my view is that returning an error is better, since if the node is messed up DLL load failures could happen and that makes sense to treat as an error case to me. But the more important thing is that we're consistent.

@@ -173,21 +234,41 @@ func (c *CimFsWriter) AddLink(oldPath string, newPath string) error {
return err
}

// Unlink deletes the file at `path` from the image.
// AddMergedLink adds a hard link from `oldPath` to `newPath` in the image. However unlike
Member:

This seems backward. Shouldn't links be newPath -> oldPath?

Member:

This still seems wrong?

return winapi.CimDeletePath(c.handle, path)
}

// Adds a tombstone at the given path. This ensures that when the CIMs are merged, the
Member:

There should be a description of what merging CIMs means somewhere. I may have missed it if it's already present though :)

Contributor (author):

I have added package documentation here; let me know if you want me to add any more details.

pkg/cimfs/cimfs.go (outdated)
// considered the base CIM (i.e. a file with a given path in the CIM at index 0 will shadow
// files with the same path in all other CIMs).
// When mounting this merged CIM, the source CIMs MUST be provided in the exact same order.
func MergeBlockCIMs(mergedCIM *BlockCIM, sourceCIMs []*BlockCIM) (err error) {
Member:

Do we only support merging block cims?

Member:

Would it be possible in the future to merge file cims and block cims together?

Contributor (author):

Yes, for now, we only need to merge block CIMs so that's what our code does. (CimFS itself allows merging standard CIMs I think)

Member:

Isn't the point of pkg/cimfs to expose generic CimFS capabilities, though, rather than scoped to only our needs? If it's not too hard to expose something more generic, that seems like a good path to me.

Member:

I see this was marked resolved but I don't see a response. I may have missed something.

Currently we have a map which maintains a mapping of CIM & containerd ID to the volume at
which a CIM is mounted for the given container. This was required before the layer
refactoring work when we needed to get the volume path from the layer cim path. However,
this isn't needed anymore. As of now, this map doesn't provide much value and makes the code a
bit complicated. Moreover, we will need to rewrite some of this code anyway when we do the work
required for handling `shim delete` cleanups properly (containerd/containerd#9727).

Signed-off-by: Amit Barve <[email protected]>
@@ -34,10 +33,6 @@ func processBaseLayerHives(layerPath string) ([]pendingCimOp, error) {
}

hivesDirInfo := &winio.FileBasicInfo{
CreationTime: windows.NsecToFiletime(time.Now().UnixNano()),
Contributor:

What would be the values of these Times when not set explicitly here?


sfw, err := newStdFileWriter(layerPath, parentLayerPaths)
if err != nil {
return nil, fmt.Errorf("error in creating new standard file writer: %w", err)
Contributor:

Cleanup of cim created?

Comment on lines +38 to +39
BlockCIMTypeSingleFile
BlockCIMTypeDevice
Member:

What's an example of when you would use one of these instead of the other?

// most CIM is added last.
for _, sCIM := range sourceCIMs {
fullPath := filepath.Join(sCIM.BlockPath, sCIM.CimName)
if err := winapi.CimAddFsToMergedImage2(cim.handle, fullPath, mergeFlag); err != nil {
Member:

Since we re-use mergeFlag here, we seem to be assuming that all CIMs will either be device or single file, but not a mix. Is this an assumption we have to make? If so, we should validate it against the input CIMs.

Comment on lines +72 to +83
switch mergedCIM.Type {
case BlockCIMTypeDevice:
mountFlags |= CimMountBlockDeviceCim
case BlockCIMTypeSingleFile:
mountFlags |= CimMountSingleFileCim
Member:

Similar to above, what happens if sourceCIMs has a mix of device and single file CIMs?

func (cw *BlockCIMLayerWriter) Remove(name string) error {
// set active write to nil so that we panic if layer tar is incorrectly formatted.
cw.activeWriter = nil
err := cw.cimWriter.AddTombstone(name)
Member:

Is it safe to assume that block CIM is always merged and never forked?


kevpar commented Nov 14, 2024

It looks like we leak stdFileWriter.root

@@ -170,7 +181,7 @@ func (cw *CimLayerWriter) Close(ctx context.Context) (retErr error) {
}
}()

// UVM based containers aren't supported with CimFS, don't process the UVM layer
// Find out the osversion of this layer, both base & non-base layers can have UtilityVM layer.
Member:

What does this mean? I don't see us checking the build version or anything here?

CimFS now supports a new format for storing CIMs, named BlockCIM. The block CIM format can
store the entire CIM on a block device (such as a VHD) or in a file formatted like a block
device.

This commit adds Go wrappers for the new CimFS APIs that allow creating, merging, and
mounting such block CIMs. Some new flags required when creating and mounting these CIMs
are added, and some deprecated flags have been removed. A new type has been introduced to
represent a block CIM. Unit tests have been added to cover the new CimFS functionality.
Lastly, since CimFS flags aren't part of the HCS schema (only the CimMount request is),
those flags have been moved from the hcs/schema2 package to the cimfs package.

Signed-off-by: Amit Barve <[email protected]>
This commit adds a layer writer that can be used for extracting an image layer tar into a
Block CIM format.

The existing forked CIM layer writer was renamed to a common base type `cimLayerWriter`.
The forked CIM layer writer and the block CIM layer writer both extend this common base
type to write layers in their respective formats.

This commit also removes some code that used `time.Now()` as the default timestamp for
some files it creates within the layer CIM. Those timestamps caused layer CIMs generated
from the same layer tar to differ; this change fixes that.

Signed-off-by: Amit Barve <[email protected]>
This commit adds the ability to parse block CIM layer mounts and to mount the merged block
CIMs to be used as a rootfs for a container.

Signed-off-by: Amit Barve <[email protected]>