charmhelpers.contrib.storage.linux package

charmhelpers.contrib.storage.linux.ceph module

class charmhelpers.contrib.storage.linux.ceph.CephBrokerRq(api_version=1, request_id=None)

Bases: object

Ceph broker request.

Multiple operations can be added to a request and sent to the Ceph broker to be executed.

Request is json-encoded for sending over the wire.

The API is versioned and defaults to version 1.
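
A minimal usage sketch (the pool name, weight and app name below are illustrative):

    rq = CephBrokerRq()
    rq.add_op_create_replicated_pool(name='mypool', replica_count=3,
                                     weight=20.0, app_name='rbd')
    # The encoded request can then be sent to the Ceph broker, for
    # example via send_request_if_needed(rq), documented later in
    # this module.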

add_op(op)

Add an op if it is not already in the list.

Parameters:op (dict) – Operation to add.
add_op_create_erasure_pool(name, erasure_profile=None, weight=None, group=None, app_name=None, max_bytes=None, max_objects=None)

Adds an operation to create an erasure-coded pool.

Parameters:
  • name (str) – Name of pool to create
  • erasure_profile (str) – Name of erasure code profile to use. If not set the ceph-mon unit handling the broker request will set its default value.
  • weight (float) – The percentage of data that is expected to be contained in the pool from the total available space on the OSDs.
  • group (str) – Group to add pool to
  • app_name (str) – (Optional) Tag pool with application name. Note that certain conventions for meaningful application names are emerging upstream; examples are rbd and rgw.
  • max_bytes (int) – Maximum bytes quota to apply
  • max_objects (int) – Maximum objects quota to apply
add_op_create_pool(name, replica_count=3, pg_num=None, weight=None, group=None, namespace=None, app_name=None, max_bytes=None, max_objects=None)

DEPRECATED: Use add_op_create_replicated_pool() or add_op_create_erasure_pool() instead.

add_op_create_replicated_pool(name, replica_count=3, pg_num=None, weight=None, group=None, namespace=None, app_name=None, max_bytes=None, max_objects=None)

Adds an operation to create a replicated pool.

Parameters:
  • name (str) – Name of pool to create
  • replica_count (int) – Number of copies Ceph should keep of your data.
  • pg_num (int) – Request specific number of Placement Groups to create for pool.
  • weight (float) – The percentage of data that is expected to be contained in the pool from the total available space on the OSDs. Used to calculate number of Placement Groups to create for pool.
  • group (str) – Group to add pool to
  • namespace (str) – Group namespace
  • app_name (str) – (Optional) Tag pool with application name. Note that certain conventions for meaningful application names are emerging upstream; examples are rbd and rgw.
  • max_bytes (int) – Maximum bytes quota to apply
  • max_objects (int) – Maximum objects quota to apply
add_op_request_access_to_group(name, namespace=None, permission=None, key_name=None, object_prefix_permissions=None)

Adds the requested permissions to the current service’s Ceph key, allowing the key to access only the specified pools or object prefixes. object_prefix_permissions should be a dictionary keyed on the permission with the corresponding value being a list of prefixes to apply that permission to.

{'rwx': ['prefix1', 'prefix2'],
 'class-read': ['prefix3']}

request
set_ops(ops)

Set request ops to provided value.

Useful for injecting ops that come from a previous request to allow comparisons to ensure validity.

class charmhelpers.contrib.storage.linux.ceph.CephBrokerRsp(encoded_rsp)

Bases: object

Ceph broker response.

Response is json-decoded and contents provided as methods/properties.

The API is versioned and defaults to version 1.

exit_code
exit_msg
request_id
class charmhelpers.contrib.storage.linux.ceph.CephConfContext(permitted_sections=None)

Bases: object

Ceph config (ceph.conf) context.

Supports user-provided Ceph configuration settings. Users can provide a dictionary as the value for the config-flags charm option, containing Ceph configuration settings keyed by their section in ceph.conf.
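
For example, the config-flags charm option might carry a dictionary of the following shape (the section and option names here are illustrative, not defaults):

    {'global': {'log to syslog': 'true'},
     'osd': {'osd journal size': 1024}}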

class charmhelpers.contrib.storage.linux.ceph.ErasurePool(service, name, erasure_code_profile='default', percent_data=10.0, app_name=None)

Bases: charmhelpers.contrib.storage.linux.ceph.Pool

create()
class charmhelpers.contrib.storage.linux.ceph.Pool(service, name)

Bases: object

An object-oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool. Do not call create() on this base class as it will not do anything. Instantiate a child class and call create(), as sketched below.
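
A minimal sketch of that pattern, assuming a Ceph user name of 'admin' and an illustrative pool name:

    pool = ReplicatedPool(service='admin', name='mypool',
                          replicas=3, percent_data=20.0)
    pool.create()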

add_cache_tier(cache_pool, mode)

Adds a new cache tier to an existing pool.

Parameters:
  • cache_pool – six.string_types. The cache tier pool name to add.
  • mode – six.string_types. The caching mode to use for this pool. Valid range = ['readonly', 'writeback']
Returns:

None

create()
get_pgs(pool_size, percent_data=10.0, device_class=None)

Return the number of placement groups to use when creating the pool.

Returns the number of placement groups which should be specified when creating the pool. This is based upon the calculation guidelines provided by the Ceph Placement Group Calculator (located online at http://ceph.com/pgcalc/).

The number of placement groups is calculated using the following:

    (Target PGs per OSD) * (OSD #) * (%Data)
    ----------------------------------------
                 (Pool size)

Per the upstream guidelines, the OSD # should really be considered based on the number of OSDs which are eligible to be selected by the pool. Since the pool creation doesn't specify any CRUSH set rules, the default rule will depend on the type of pool being created (replicated or erasure).

This code makes no attempt to determine the number of OSDs which can be selected for the specific rule; rather, it is left to the user to tune via the ‘expected-osd-count’ config option.

Parameters:
  • pool_size – int. pool_size is either the number of replicas for replicated pools or the K+M sum for erasure coded pools
  • percent_data – float. The percentage of data that is expected to be contained in the pool for the specific OSD set. The default assumes 10% of the data is for this pool, which is a relatively low percentage but allows the pg_num to be increased later. NOTE: the default primarily handles the scenario where related charms requiring pools have not been upgraded to indicate their relative usage of the pools.
  • device_class – str. class of storage to use for basis of pgs calculation; ceph supports nvme, ssd and hdd by default based on presence of devices of each type in the deployment.
Returns:

int. The number of pgs to use.
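
A worked example of the guideline formula above, assuming the upstream pgcalc default of 100 target PGs per OSD (an assumption for illustration; the helper's actual constant is not shown here):

    target_pgs_per_osd = 100  # assumed pgcalc default
    osd_count = 10            # OSDs eligible for this pool
    percent_data = 0.10       # the 10% default
    pool_size = 3             # replicas, or K+M for erasure pools

    pgs = (target_pgs_per_osd * osd_count * percent_data) / pool_size
    # pgs ~= 33.3; PG counts are conventionally rounded to a nearby
    # power of two, giving pg_num = 32.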

remove_cache_tier(cache_pool)

Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.

Parameters:cache_pool – six.string_types. The cache tier pool name to remove.
Returns:

None

exception charmhelpers.contrib.storage.linux.ceph.PoolCreationError(message)

Bases: Exception

A custom error to inform the caller that a pool creation failed. Provides an error message

class charmhelpers.contrib.storage.linux.ceph.ReplicatedPool(service, name, pg_num=None, replicas=2, percent_data=10.0, app_name=None)

Bases: charmhelpers.contrib.storage.linux.ceph.Pool

create()
charmhelpers.contrib.storage.linux.ceph.add_key(service, key)

Add a key to a keyring.

Creates the keyring if it doesn’t already exist.

Logs and returns if the key is already in the keyring.

charmhelpers.contrib.storage.linux.ceph.configure(service, key, auth, use_syslog)

Perform basic configuration of Ceph.

charmhelpers.contrib.storage.linux.ceph.copy_files(src, dst, symlinks=False, ignore=None)

Copy files from src to dst.

charmhelpers.contrib.storage.linux.ceph.create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host', data_chunks=2, coding_chunks=1, locality=None, durability_estimator=None, device_class=None)

Create a new erasure code profile if one does not already exist. Updates the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ for more details.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • profile_name – six.string_types
  • erasure_plugin_name – six.string_types
  • failure_domain – six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row']
  • data_chunks – int
  • coding_chunks – int
  • locality – int
  • durability_estimator – int
  • device_class – six.string_types
Returns:

None. Can raise CalledProcessError

charmhelpers.contrib.storage.linux.ceph.create_key_file(service, key)

Create a file containing key.

charmhelpers.contrib.storage.linux.ceph.create_keyring(service, key)

Deprecated. Please use the more accurately named ‘add_key’

charmhelpers.contrib.storage.linux.ceph.create_pool(service, name, replicas=3, pg_num=None)

Create a new RADOS pool.

charmhelpers.contrib.storage.linux.ceph.create_rbd_image(service, pool, image, sizemb)

Create a new RADOS block device.

charmhelpers.contrib.storage.linux.ceph.delete_keyring(service)

Delete an existing Ceph keyring.

charmhelpers.contrib.storage.linux.ceph.delete_pool(service, name)

Delete a RADOS pool from ceph.

charmhelpers.contrib.storage.linux.ceph.enable_pg_autoscale(service, pool_name)

Enable Ceph’s PG autoscaler for the specified pool.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • pool_name – six.string_types. The name of the pool to enable autoscaling on
Raise:

CalledProcessError if the command fails

charmhelpers.contrib.storage.linux.ceph.enabled_manager_modules()

Return a list of enabled manager modules.

Return type:List[str]
charmhelpers.contrib.storage.linux.ceph.ensure_ceph_keyring(service, user=None, group=None, relation='ceph', key=None)

Ensures a ceph keyring is created for a named service and optionally ensures user and group ownership.

@returns boolean: Flag to indicate whether a key was successfully written
to disk based on either relation data or a supplied key
charmhelpers.contrib.storage.linux.ceph.ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, blk_device, fstype, system_services=[], replicas=3)

NOTE: This function must only be called from a single service unit for the same rbd_img, otherwise data loss will occur.

Ensures given pool and RBD image exists, is mapped to a block device, and the device is formatted and mounted at the given mount_point.

If formatting a device for the first time, data existing at mount_point will be migrated to the RBD device before being re-mounted.

All services listed in system_services will be stopped prior to data migration and restarted when complete.
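
A usage sketch with illustrative values (the paths, names and sizes below are assumptions, not defaults):

    ensure_ceph_storage(service='admin', pool='mypool', rbd_img='myimg',
                        sizemb=1024, mount_point='/srv/data',
                        blk_device='/dev/rbd/mypool/myimg',
                        fstype='ext4', system_services=['apache2'])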

charmhelpers.contrib.storage.linux.ceph.erasure_profile_exists(service, name)

Check to see if an erasure code profile already exists.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • name – six.string_types
Returns:

int or None

charmhelpers.contrib.storage.linux.ceph.filesystem_mounted(fs)

Determine whether a filesystem is already mounted.

charmhelpers.contrib.storage.linux.ceph.get_broker_rsp_key()

Return broker response key for this unit

This is the key that ceph is going to use to pass request status information back to this unit

charmhelpers.contrib.storage.linux.ceph.get_cache_mode(service, pool_name)

Find the current caching mode of the pool_name given.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • pool_name – six.string_types
Returns:

int or None

charmhelpers.contrib.storage.linux.ceph.get_ceph_nodes(relation='ceph')

Query named relation to determine current nodes.

charmhelpers.contrib.storage.linux.ceph.get_erasure_profile(service, name)
Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • name
Returns:

charmhelpers.contrib.storage.linux.ceph.get_mon_map(service)

Returns the current monitor map.

Parameters:service – six.string_types. The Ceph user name to run the command under
Returns:

json string.

Raise:

ValueError if the monmap fails to parse. Also raises CalledProcessError if the ceph command fails.
charmhelpers.contrib.storage.linux.ceph.get_osds(service, device_class=None)

Return a list of all Ceph Object Storage Daemons currently in the cluster (optionally filtered by storage device class).

Parameters:device_class (str) – Class of storage device for OSDs
charmhelpers.contrib.storage.linux.ceph.get_previous_request(rid)

Return the last ceph broker request sent on a given relation

@param rid: Relation id to query for request

charmhelpers.contrib.storage.linux.ceph.get_request_states(request, relation='ceph')
Return a dict of requests per relation id with their corresponding
completion state.

This allows a charm which has a request for ceph to see whether an equivalent request is already being processed and, if so, what state that request is in.

@param request: A CephBrokerRq object

charmhelpers.contrib.storage.linux.ceph.has_broker_rsp(rid=None, unit=None)

Return True if the broker_rsp key is ‘truthy’ (i.e. set to something) in the relation data.

Parameters:
  • rid (Union[str, None]) – The relation to check (default of None means current relation)
  • unit (Union[str, None]) – The remote unit to check (default of None means current unit)
Returns:

True if broker key exists and is set to something ‘truthy’

Return type:

bool

charmhelpers.contrib.storage.linux.ceph.hash_monitor_names(service)

Uses the get_mon_map() function to get information about the monitor cluster. Hash the name of each monitor. Return a sorted list of monitor hashes in ascending order.

Parameters:service – six.string_types. The Ceph user name to run the command under
Return type:

dict. json dict of monitor name, ip address and rank, for example:

{'name': 'ip-172-31-13-165', 'rank': 0, 'addr': '172.31.13.165:6789/0'}
charmhelpers.contrib.storage.linux.ceph.image_mapped(name)

Determine whether a RADOS block device is mapped locally.

charmhelpers.contrib.storage.linux.ceph.install()

Basic Ceph client installation.

charmhelpers.contrib.storage.linux.ceph.is_broker_action_done(action, rid=None, unit=None)

Check whether broker action has completed yet.

@param action: name of action to be performed
@returns True if action complete otherwise False

charmhelpers.contrib.storage.linux.ceph.is_request_complete(request, relation='ceph')

Check to see if a functionally equivalent request has already been completed

Returns True if a similar request has been completed

@param request: A CephBrokerRq object

charmhelpers.contrib.storage.linux.ceph.is_request_complete_for_rid(request, rid)

Check if a given request has been completed on the given relation

@param request: A CephBrokerRq object
@param rid: Relation ID

charmhelpers.contrib.storage.linux.ceph.is_request_sent(request, relation='ceph')

Check to see if a functionally equivalent request has already been sent

Returns True if a similar request has been sent

@param request: A CephBrokerRq object

charmhelpers.contrib.storage.linux.ceph.make_filesystem(blk_device, fstype='ext4', timeout=10)

Make a new filesystem on the specified block device.

charmhelpers.contrib.storage.linux.ceph.map_block_storage(service, pool, image)

Map a RADOS block device for local use.

charmhelpers.contrib.storage.linux.ceph.mark_broker_action_done(action, rid=None, unit=None)

Mark action as having been completed.

@param action: name of action to be performed
@returns None

charmhelpers.contrib.storage.linux.ceph.monitor_key_delete(service, key)

Delete a key and value pair from the monitor cluster.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • key – six.string_types. The key to delete.

charmhelpers.contrib.storage.linux.ceph.monitor_key_exists(service, key)

Searches for the existence of a key in the monitor cluster.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • key – six.string_types. The key to search for
Returns:

True if the key exists, False if not.

Raise:

CalledProcessError if an unknown error occurs
charmhelpers.contrib.storage.linux.ceph.monitor_key_get(service, key)

Gets the value of an existing key in the monitor cluster.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • key – six.string_types. The key to search for.
Returns:

The value of that key, or None if not found.

charmhelpers.contrib.storage.linux.ceph.monitor_key_set(service, key, value)

Sets a key value pair on the monitor cluster.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • key – six.string_types. The key to set.
  • value – The value to set. This will be converted to a string before setting.
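
A usage sketch of the monitor key/value helpers, assuming a Ceph user name of 'admin' and an illustrative key:

    monitor_key_set('admin', 'unit-bootstrapped', 'true')
    if monitor_key_exists('admin', 'unit-bootstrapped'):
        value = monitor_key_get('admin', 'unit-bootstrapped')
    monitor_key_delete('admin', 'unit-bootstrapped')
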
charmhelpers.contrib.storage.linux.ceph.place_data_on_block_device(blk_device, data_src_dst)

Migrate data in data_src_dst to blk_device and then remount.

charmhelpers.contrib.storage.linux.ceph.pool_exists(service, name)

Check to see if a RADOS pool already exists.

charmhelpers.contrib.storage.linux.ceph.pool_set(service, pool_name, key, value)

Sets a value for a RADOS pool in ceph.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • pool_name – six.string_types
  • key – six.string_types
  • value
Returns:

None. Can raise CalledProcessError

charmhelpers.contrib.storage.linux.ceph.rbd_exists(service, pool, rbd_img)

Check to see if a RADOS block device exists.

charmhelpers.contrib.storage.linux.ceph.remove_erasure_profile(service, profile_name)

Remove an existing erasure code profile. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ for more details.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • profile_name – six.string_types
Returns:

None. Can raise CalledProcessError

charmhelpers.contrib.storage.linux.ceph.remove_pool_quota(service, pool_name)

Remove the quota on a RADOS pool in ceph.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • pool_name – six.string_types
Returns:

None. Can raise CalledProcessError

charmhelpers.contrib.storage.linux.ceph.remove_pool_snapshot(service, pool_name, snapshot_name)

Remove a snapshot from a RADOS pool in ceph.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • pool_name – six.string_types
  • snapshot_name – six.string_types
Returns:

None. Can raise CalledProcessError

charmhelpers.contrib.storage.linux.ceph.rename_pool(service, old_name, new_name)

Rename a Ceph pool from old_name to new_name.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • old_name – six.string_types
  • new_name – six.string_types
Returns:

None

charmhelpers.contrib.storage.linux.ceph.send_request_if_needed(request, relation='ceph')

Send broker request if an equivalent request has not already been sent

@param request: A CephBrokerRq object
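
A common usage sketch combining the broker helpers in this module (the pool name is illustrative):

    rq = CephBrokerRq()
    rq.add_op_create_replicated_pool(name='mypool', replica_count=3)
    if is_request_complete(rq):
        pass  # an equivalent request has already been fulfilled
    else:
        send_request_if_needed(rq)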

charmhelpers.contrib.storage.linux.ceph.set_app_name_for_pool(client, pool, name)

Calls osd pool application enable for the specified pool name

Parameters:
  • client (str) – Name of the ceph client to use
  • pool (str) – Pool to set app name for
  • name (str) – app name for the specified pool
Raises:

CalledProcessError if ceph call fails

charmhelpers.contrib.storage.linux.ceph.set_pool_quota(service, pool_name, max_bytes=None, max_objects=None)
Parameters:
  • service (str) – The Ceph user name to run the command under
  • pool_name (str) – Name of pool
  • max_bytes (int) – Maximum bytes quota to apply
  • max_objects (int) – Maximum objects quota to apply
Raises:

subprocess.CalledProcessError

charmhelpers.contrib.storage.linux.ceph.snapshot_pool(service, pool_name, snapshot_name)

Snapshots a RADOS pool in ceph.

Parameters:
  • service – six.string_types. The Ceph user name to run the command under
  • pool_name – six.string_types
  • snapshot_name – six.string_types
Returns:

None. Can raise CalledProcessError

charmhelpers.contrib.storage.linux.ceph.update_pool(client, pool, settings)
charmhelpers.contrib.storage.linux.ceph.validator(value, valid_type, valid_range=None)

Used to validate the pool values described at http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values. Example input:

    validator(value=1, valid_type=int, valid_range=[0, 2])

This says that value=1 is being tested and must be an int within the inclusive range [0, 2].

Parameters:
  • value – The value to validate
  • valid_type – The type that value should be.
  • valid_range – A range of values that value can assume.
Returns:

charmhelpers.contrib.storage.linux.loopback module

charmhelpers.contrib.storage.linux.loopback.create_loopback(file_path)

Create a loopback device for a given backing file.

Returns:str: Full path to new loopback device (eg, /dev/loop0)
charmhelpers.contrib.storage.linux.loopback.ensure_loopback_device(path, size)

Ensure a loopback device exists for a given backing file path and size. If a loopback device is not already mapped to the file, a new one will be created.

TODO: Confirm size of found loopback device.

Returns:str: Full path to the ensured loopback device (eg, /dev/loop0)
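
A usage sketch (the backing file path is illustrative, and the size string format is assumed to match the underlying tooling):

    loop_dev = ensure_loopback_device('/srv/images/backing.img', '5G')
    # loop_dev is the device path, e.g. '/dev/loop0'
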
charmhelpers.contrib.storage.linux.loopback.is_mapped_loopback_device(device)

Checks if a given device name is an existing/mapped loopback device.

Parameters:device – str: Full path to the device (eg, /dev/loop1).
Returns:str: Path to the backing file if it is a loopback device, empty string otherwise.

charmhelpers.contrib.storage.linux.loopback.loopback_devices()

Parse through ‘losetup -a’ output to determine currently mapped loopback devices. Output is expected to look like:

/dev/loop0: [0807]:961814 (/tmp/my.img)

or:

/dev/loop0: [0807]:961814 (/tmp/my.img (deleted))
Returns:dict: a dict mapping {loopback_dev: backing_file}

charmhelpers.contrib.storage.linux.lvm module

charmhelpers.contrib.storage.linux.lvm.create_logical_volume(lv_name, volume_group, size=None)

Create a new logical volume in an existing volume group

Parameters:
  • lv_name – str: name of logical volume to be created.
  • volume_group – str: Name of volume group to use for the new volume.
  • size – str: Size of logical volume to create (100% if not supplied)
Raises:

subprocess.CalledProcessError – in the event that the lvcreate fails.

charmhelpers.contrib.storage.linux.lvm.create_lvm_physical_volume(block_device)

Initialize a block device as an LVM physical volume.

Parameters:block_device – str: Full path of block device to initialize.
charmhelpers.contrib.storage.linux.lvm.create_lvm_volume_group(volume_group, block_device)

Create an LVM volume group backed by a given block device.

Assumes block device has already been initialized as an LVM PV.

Parameters:
  • volume_group – str: Name of volume group to create.
  • block_device – str: Full path of PV-initialized block device.
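
A sketch of the typical flow across this module's helpers (the device path, names and size are illustrative; the size string is assumed to follow lvcreate conventions):

    create_lvm_physical_volume('/dev/sdb')
    create_lvm_volume_group('vg-data', '/dev/sdb')
    create_logical_volume('lv-data', 'vg-data', size='10G')
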
charmhelpers.contrib.storage.linux.lvm.deactivate_lvm_volume_group(block_device)

Deactivate any volume group associated with an LVM physical volume.

Parameters:block_device – str: Full path to LVM physical volume
charmhelpers.contrib.storage.linux.lvm.extend_logical_volume_by_device(lv_name, block_device)

Extends the size of logical volume lv_name by the amount of free space on physical volume block_device.

Parameters:
  • lv_name – str: name of logical volume to be extended (vg/lv format)
  • block_device – str: name of block_device to be allocated to lv_name
charmhelpers.contrib.storage.linux.lvm.is_lvm_physical_volume(block_device)

Determine whether a block device is initialized as an LVM PV.

Parameters:block_device – str: Full path of block device to inspect.
Returns:boolean: True if block device is a PV, False if not.
charmhelpers.contrib.storage.linux.lvm.list_logical_volumes(select_criteria=None, path_mode=False)

List logical volumes

Parameters:
  • select_criteria – str: Limit list to those volumes matching these criteria (see ‘lvs -S help’ for more details)
  • path_mode – bool: return logical volume name in ‘vg/lv’ format, this format is required for some commands like lvextend
Returns:

[str]: List of logical volumes

charmhelpers.contrib.storage.linux.lvm.list_lvm_volume_group(block_device)

List LVM volume group associated with a given block device.

Assumes block device is a valid LVM PV.

Parameters:block_device – str: Full path of block device to inspect.
Returns:str: Name of volume group associated with block device or None
charmhelpers.contrib.storage.linux.lvm.list_thin_logical_volume_pools(*, select_criteria='lv_attr =~ ^t', path_mode=False)

List thin logical volume pools

Parameters:
  • select_criteria – str: Limit list to those volumes matching these criteria (see ‘lvs -S help’ for more details)
  • path_mode – bool: return logical volume name in ‘vg/lv’ format, this format is required for some commands like lvextend
Returns:

[str]: List of logical volumes

charmhelpers.contrib.storage.linux.lvm.list_thin_logical_volumes(*, select_criteria='lv_attr =~ ^V', path_mode=False)

List thin logical volumes

Parameters:
  • select_criteria – str: Limit list to those volumes matching these criteria (see ‘lvs -S help’ for more details)
  • path_mode – bool: return logical volume name in ‘vg/lv’ format, this format is required for some commands like lvextend
Returns:

[str]: List of logical volumes

charmhelpers.contrib.storage.linux.lvm.remove_lvm_physical_volume(block_device)

Remove LVM PV signatures from a given block device.

Parameters:block_device – str: Full path of block device to scrub.

charmhelpers.contrib.storage.linux.utils module

charmhelpers.contrib.storage.linux.utils.is_block_device(path)

Confirm device at path is a valid block device node.

Returns:boolean: True if path is a block device, False if not.
charmhelpers.contrib.storage.linux.utils.is_device_mounted(device)

Given a device path, return True if that device is mounted, and False if it isn’t.

Parameters:device – str: Full path of the device to check.
Returns:boolean: True if the path represents a mounted device, False if it doesn’t.
charmhelpers.contrib.storage.linux.utils.is_luks_device(dev)

Determine if dev is a LUKS-formatted block device.

Parameters:dev – A full path to a block device to check for LUKS header presence
Returns:boolean: indicates whether the device is LUKS-formatted, based on the presence of a LUKS header.

charmhelpers.contrib.storage.linux.utils.is_mapped_luks_device(dev)

Determine if dev is a mapped LUKS device.

Parameters:dev – A full path to a block device to be checked
Returns:boolean: indicates whether the device is mapped

charmhelpers.contrib.storage.linux.utils.mkfs_xfs(device, force=False, inode_size=1024)

Format device with XFS filesystem.

By default this should fail if the device already has a filesystem on it.

Parameters:
  • device (str) – Full path to device to format
  • force (boolean) – Force operation
  • inode_size (int) – XFS inode size in bytes
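
A usage sketch (the device path is illustrative):

    mkfs_xfs('/dev/sdb1', force=True, inode_size=1024)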

charmhelpers.contrib.storage.linux.utils.zap_disk(block_device)

Clear a block device of partition table. Relies on sgdisk, which is installed as part of the ‘gdisk’ package in Ubuntu.

Parameters:block_device – str: Full path of block device to clean.