# cloud issues
https://gitlab.alpinelinux.org/groups/alpine/cloud/-/issues (updated 2024-03-28)

## NoCloud x86_64 BIOS images hang at preboot in libvirt
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/158 (updated 2024-03-28, jon ⚝)

Using the images
- https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/cloud/nocloud_alpine-3.19.1-x86_64-bios-tiny-r0.qcow2
- https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/cloud/nocloud_alpine-3.19.1-x86_64-bios-cloudinit-r0.qcow2
with [an Ansible playbook that also crafts cloud-init images](https://github.com/christianb93/ansible-samples/tree/master/libvirt) leads to a running domain that does not continue past preboot. It hangs forever at:
```
Loading vmlinuz-virt... ok
Loading initramfs-virt...ok
```
### Screenshots
![grafik](/uploads/3cee10f62c915d069d821dc4cbc2e123/grafik.png)
![grafik](/uploads/a56cceb691b27d7e42379e193806a738/grafik.png)
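When triaging hangs like this, it can help to diff the boot-relevant parts of `virsh dumpxml` output between a hanging and a working domain. A small stdlib-only sketch (the helper name and the choice of summarized fields are my own, not an existing tool):

```python
# Summarize the boot-relevant bits of a libvirt domain XML document:
# machine type, firmware loader (absent => BIOS/SeaBIOS), disk buses,
# and console targets. Stdlib only.
import xml.etree.ElementTree as ET

def boot_summary(domain_xml: str) -> dict:
    root = ET.fromstring(domain_xml)
    os_type = root.find("./os/type")
    return {
        "arch": os_type.get("arch"),
        "machine": os_type.get("machine"),
        # <os><loader> is only present for UEFI firmware; None means BIOS
        "loader": getattr(root.find("./os/loader"), "text", None),
        "disk_buses": [t.get("bus") for t in root.findall("./devices/disk/target")],
        "consoles": [c.get("type") for c in root.findall("./devices/console/target")],
    }

# Minimal sample mirroring the relevant parts of the domain XML below.
sample = """
<domain type='kvm'>
  <os><type arch='x86_64' machine='pc-i440fx-mantic'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'><target dev='vda' bus='virtio'/></disk>
    <disk type='file' device='cdrom'><target dev='vdb' bus='sata'/></disk>
    <console type='pty'><target type='serial' port='0'/></console>
  </devices>
</domain>
"""
print(boot_summary(sample))
```

Note the domain has a serial console configured; output may continue there even when the VGA console appears stuck.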
### Details
<details><summary>XML</summary>
```xml
<domain type='kvm' id='9'>
  <name>alpine0</name>
  <uuid>0ba7aa97-3342-48b3-b596-9a84898cd0f4</uuid>
  <memory unit='KiB'>1000448</memory>
  <currentMemory unit='KiB'>1000000</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-mantic'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>EPYC-Rome</model>
    <vendor>AMD</vendor>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='tsc-deadline'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='stibp'/>
    <feature policy='require' name='arch-capabilities'/>
    <feature policy='require' name='ssbd'/>
    <feature policy='require' name='cmp_legacy'/>
    <feature policy='require' name='amd-ssbd'/>
    <feature policy='require' name='virt-ssbd'/>
    <feature policy='require' name='lbrv'/>
    <feature policy='require' name='tsc-scale'/>
    <feature policy='require' name='vmcb-clean'/>
    <feature policy='require' name='pause-filter'/>
    <feature policy='require' name='pfthreshold'/>
    <feature policy='require' name='v-vmsave-vmload'/>
    <feature policy='require' name='vgif'/>
    <feature policy='require' name='svme-addr-chk'/>
    <feature policy='require' name='rdctl-no'/>
    <feature policy='require' name='skip-l1dfl-vmentry'/>
    <feature policy='require' name='mds-no'/>
    <feature policy='require' name='pschange-mc-no'/>
    <feature policy='disable' name='xsaves'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/user/src/github.com/christianb93/ansible-samples/libvirt/.state/pool/alpine0' index='2'/>
      <backingStore type='file' index='3'>
        <format type='qcow2'/>
        <source file='/home/user/src/github.com/christianb93/ansible-samples/libvirt/.state/alpine0.qcow2'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/user/src/github.com/christianb93/ansible-samples/libvirt/.state/pool/cloud-config.iso' index='1'/>
      <backingStore/>
      <target dev='vdb' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:03:6d:f3'/>
      <source network='ansible' portid='b8856b89-1c67-4aae-9ec6-d476d6fe1909' bridge='ansible-bridge'/>
      <target dev='vnet8'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/8'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/8'>
      <source path='/dev/pts/8'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='apparmor' relabel='yes'>
    <label>libvirt-0ba7aa97-3342-48b3-b596-9a84898cd0f4</label>
    <imagelabel>libvirt-0ba7aa97-3342-48b3-b596-9a84898cd0f4</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+64055:+105</label>
    <imagelabel>+64055:+105</imagelabel>
  </seclabel>
</domain>
```
</details>

## Document what "nocloud" cloud images are
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/157 (updated 2024-03-24, Vincent Legoll)

Could you add a small explanation about what those image variants are
on the [cloud images page](https://alpinelinux.org/cloud/)
The other variations are explained:
> Image variants are assembled for cloud-valid combinations of CPU architecture (aarch64, x86_64),
> firmware (BIOS/UEFI), and instance bootstrap systems (Tiny Cloud and cloud-init).
A link to more in-depth documentation of those "nocloud" images would be really nice. I don't know if such a doc exists.
My personal question wrt the "nocloud" images is: "are they suitable for PCIe (or USB) device passthrough", which translates to "do they have all the drivers for *real hardware* instead of just the virtual ones".

## Cloud-init disk setup depends on lsblk which is missing
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/156 (updated 2024-03-05, Nils)

`lsblk` is not installed by default in the cloud-init image. This causes cc_disk_setup.py to fail, because it depends on it: https://github.com/canonical/cloud-init/blob/main/cloudinit/config/cc_disk_setup.py#L260C9-L260C18

## deprecate/remove vnic_eth_hotplug (etc.)
https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud/-/issues/55 (updated 2024-01-27, Jake Buchholz Göktürk)

The Alpine aports APKBUILD is still configuring the `vnic_eth_hotplug` module, but it's no longer installed.
With the switch to `dhcpcd`, this is no longer needed, and we should be able to remove it, `imds-net-sync`, and maybe even `assemble-interfaces`.

## 404 is okay if there's no user-data set
https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud/-/issues/54 (updated 2024-01-27, Jake Buchholz Göktürk)

Treat a 404 when trying to get user-data as success -- it wasn't set.

## Include tiny-cloud-alpine for nocloud image
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/146 (updated 2024-01-13, Natanael Copa)

It would be nice if we could add the tiny-cloud-alpine package to the nocloud qcow image. That would add support for `#alpine-config` out of the box, in addition to `#!/bin/sh` scripts.

## can't access IMDS? this should be fatal
https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud/-/issues/49 (updated 2023-12-11, Jake Buchholz Göktürk)

with the current issues with #48, we're also seeing the following...
```
* Tiny Cloud - early phase ...
++ save_userdata: starting
wget: can't connect to remote host (169.254.169.254): Network unreachable
++ save_userdata: done
[ ok ]
* Tiny Cloud - main phase ...
++ unknown_userdata: starting
unknown_userdata: unable to process 'unknown' user-data
++ unknown_userdata: done
++ create_default_user: starting
create_default_user: already exists
++ create_default_user: done
++ set_hostname: starting
wget: can't connect to remote host (169.254.169.254): Network unreachable
set_hostname: no hostname set
++ set_hostname: done
++ set_ssh_keys: starting
wget: can't connect to remote host (169.254.169.254): Network unreachable
set_ssh_keys: no ssh key found
++ set_ssh_keys: done
[ ok ]
ssh-keygen: generating new host keys: RSA ECDSA ED25519
* Starting sshd ... [ ok ]
* Tiny Cloud - final phase ...
++ bootstrap_complete: starting
++ bootstrap_complete: done
[ ok ]
```

## Discussion: should we use setup-* scripts for some operations
https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud/-/issues/47 (updated 2023-10-17, Guido Trotter)

For some lib/tiny-cloud/user-data/alpine-config operations there are equivalent setup scripts in the setup-alpine repository. The most straightforward examples are:
- apk/cache: setup-apkcache
- ntp/enabled+ntp_client: setup-ntp
But we could also consider it for user creation+ssh key (setup-user) and root-ssh-key (setup-sshd).
Would it make sense to look into using those commands where we can (since this is anyway about configuring Alpine Linux), so that we stay as close as possible to a standard Alpine installation as done by setup-alpine, instead of potentially diverging?
I wanted to discuss pros and cons in an issue before looking into code.
Thanks,
Guido (assignee: Natanael Copa)

## Use nocloud yaml provider for autoinstall
https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud/-/issues/35 (updated 2023-05-31, Natanael Copa)

Would be nice if we could use tiny-cloud-nocloud as the base for a headless/unattended installer.
The idea is to make the alpine iso images look for `LABEL=cidata` and enable tiny-cloud-nocloud if found.
Then users could simply add a `seed.iso` `user-data` `#cloud-config` yaml file to perform autoinstall.
The auto installer should be able to:
- [x] set local hostname before network setup (init-early) so it's available for the DHCP client
- [x] set hostname (init-main) after network is up
- [x] configure network
- [ ] create users
- [ ] add ssh keys for created users
- [x] add ssh keys for default user (https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud/-/merge_requests/37)
- [x] enable sshd
- [x] install packages
- [x] `runcmd` to run commands
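A hypothetical `user-data` seed covering the checked items might look like this (field names follow common `#cloud-config` conventions; exactly which keys tiny-cloud-nocloud supports is an assumption here):

```yaml
#cloud-config
hostname: alpine-auto            # set early so it's available for DHCP
ssh_authorized_keys:             # keys for the default user
  - ssh-ed25519 AAAA... user@example.org
packages:                        # packages to install
  - curl
runcmd:                          # commands to run after bootstrap
  - rc-update add sshd default
```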
Future versions could also have support for setting up disks.

## Make a single "Hybrid MBR" Image Instead of Separate BIOS/UEFI
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/132 (updated 2022-10-31, Jake Buchholz Göktürk)

`gptfdisk` would be the tool to do this.
In-depth analysis of the pros/cons -- https://www.rodsbooks.com/gdisk/hybrid.html
The issue that either MBR or GPT misses out on any partition changes is mostly moot in a cloud environment. It would be extremely rare for an instance to be booted with one and then switched to the other later in its lifetime.
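`gptfdisk` would do the actual conversion; as a byte-level illustration of what "hybrid" means (this checker is a hypothetical helper, not part of any existing tool): a pure-GPT disk carries a protective MBR with a single 0xEE entry, while a hybrid MBR additionally mirrors up to three real partitions into the legacy table so BIOS firmware can see them too.

```python
# Classify the first sector of a disk image as protective-only or hybrid MBR.
def mbr_partition_types(sector0: bytes) -> list:
    """Return the type byte of each in-use entry in the MBR partition table."""
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        raise ValueError("no MBR boot signature")
    types = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        if any(entry):              # an all-zero slot means "unused"
            types.append(entry[4])  # offset 4 of an entry holds the type byte
    return types

def is_hybrid_mbr(sector0: bytes) -> bool:
    """True if a protective 0xEE entry coexists with real partition entries."""
    types = mbr_partition_types(sector0)
    return 0xEE in types and any(t != 0xEE for t in types)

# Demo: build fake sector-0 layouts with the given partition type bytes.
def make_mbr(type_bytes) -> bytes:
    sector = bytearray(512)
    sector[510:512] = b"\x55\xaa"
    for i, t in enumerate(type_bytes):
        sector[446 + 16 * i + 4] = t
    return bytes(sector)

print(is_hybrid_mbr(make_mbr([0xEE])))        # protective only -> False
print(is_hybrid_mbr(make_mbr([0xEE, 0x83])))  # hybrid (extra Linux entry) -> True
```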
While this is an opportunity to also address the ongoing #90 saga, if that turns out to be a bottomless rabbit hole, using `grub2` for UEFI would be good enough in the interim.

## Support Bare Metal Cloud Images
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/130 (updated 2024-03-24, Jake Buchholz Göktürk)

This will probably involve using `linux-lts` instead of `linux-virt`.

## Bootload Directly via EFI_STUB?
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/90 (updated 2023-04-23, Jake Buchholz Göktürk)

*Created by: tomalok*
(As reported in https://github.com/mcrute/tiny-ec2-bootstrap/issues/15 by @tsarna)
Problem is that the bootloader on **x86_64** is installed on the raw EBS volume (allowing `resize2fs` to expand to the full volume size) vs. a partition on **aarch64**, which restricts `resize2fs` to the size of the root partition, as defined by the AMI image.
Possible solutions to be applied here (instead of https://github.com/mcrute/tiny-ec2-bootstrap)...
* install `grub` bootloader on raw EBS volume instead of partitioning (if that's even possible)
* perhaps switch to the `u-boot` bootloader. There is apparently some compatibility with `syslinux` -- https://linux-sunxi.org/U-Boot#Booting_with_extlinux.conf -- switching would likely reduce the footprint of the **aarch64** AMI.

## AWS Marketplace
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/125 (updated 2021-12-27, Mike Crute <mike@crute.us>)

*Created by: oxr463*
> ...Ideally I'd say that we should not ever delete an AMI but since I'm personally funding this I'd like to strike a balance between supporting our users and not incurring unbounded cost.
> --https://github.com/mcrute/alpine-ec2-ami/issues/23#issuecomment-635713152
Why not list these in the marketplace?

## Support Upstream CI
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/124 (updated 2021-12-27, Mike Crute <mike@crute.us>)

Alpine is running a [GitLab instance](https://gitlab.alpinelinux.org/) that supports CI; we should consider migrating to it at some point. This will require updates to the identity broker to support the GitLab CI authentication model. As of right now it seems that there's [not much support](https://docs.gitlab.com/ee/api/README.html#gitlab-ci-job-token) for using the tokens they provide to verify job authentication in their API, so we may need to be clever or wait for them to update that.
Requested from alpine-ec2-ami#22

## Document update and retention policy
https://gitlab.alpinelinux.org/alpine/cloud/alpine-cloud-images/-/issues/23 (updated 2023-10-14, Mike Crute <mike@crute.us>)

We may want to add links in the README (or in a separate document) for publicly-available historical AMIs. Probably not a bad idea to clearly state the retention policy for old AMIs, too.