The upgrade is triggered with the mintupgrade tool (“sudo mintupgrade check”); if it is not present, install it first (“apt install mintupgrade”).
I was a bit scared when the fonts were upgraded and the app names turned into garbled characters, but that was fixed during the update. You need a system snapshot with Timeshift in order to perform the update. The update was performed on an AMD system (5950X) with an RTX 3080 and an encrypted disk. It took half an hour, but that’s fine.
I think there will be an official guide in the next few days, but for me it was smooth enough, with two exceptions: my wired network connection needed to be added manually after the upgrade, and Howdy does not work anymore.
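For reference, the steps I went through can be sketched roughly like this (a sketch, not the official guide — subcommand names may differ slightly between mintupgrade versions):

```
# Create a system snapshot first -- mintupgrade refuses to run without one
sudo timeshift --create --comments "pre-upgrade snapshot"

# Install the upgrade tool if it is not present
sudo apt install mintupgrade

# Check whether the system is ready, then run the actual upgrade
sudo mintupgrade check
sudo mintupgrade upgrade
```

If anything goes wrong, the Timeshift snapshot lets you roll the whole system back.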
To make it short: I’m now using an existing open-source tool for VSCode that can point to my local server. It’s called Open Copilot and is licensed under Apache 2.0, which allows commercial use.
Once you have set up the URL for your local server, you can query the extension (“Cody”) via the icon in the left sidebar or inline in your code.
In this video I’ll show you how smooth the in-place update to Windows Server 2022 can be. The goal is to finally enable GPU-P for my virtual machines (currently this is still not working, though).
To put the server to good use, I will use it to create a VM test environment via script. The script creates a copy of the existing VMs (their VHDs) with a prefix and attaches them to a private network. With this approach it is easy to test software updates etc.
The script takes two parameters: the prefix of the new environment and the location of the template VMs.
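The core logic can be sketched roughly like this (a simplified sketch, not the actual script — the switch name, memory size, and file layout are placeholder assumptions):

```powershell
param(
    [string]$Prefix,        # prefix for the cloned environment, e.g. "TEST-"
    [string]$TemplatePath   # folder containing the template VHDs
)

# Private switch so the clones cannot reach the production network
$switch = Get-VMSwitch -Name "PrivateTest" -ErrorAction SilentlyContinue
if (-not $switch) {
    $switch = New-VMSwitch -Name "PrivateTest" -SwitchType Private
}

foreach ($vhd in Get-ChildItem -Path $TemplatePath -Filter *.vhdx) {
    # Copy the template disk under the new prefix
    $newVhd = Join-Path $TemplatePath ($Prefix + $vhd.Name)
    Copy-Item $vhd.FullName $newVhd

    # Create a VM around the copied disk and attach it to the private switch
    $vmName = $Prefix + [IO.Path]::GetFileNameWithoutExtension($vhd.Name)
    New-VM -Name $vmName -MemoryStartupBytes 4GB -VHDPath $newVhd -SwitchName $switch.Name
    Start-VM -Name $vmName
}
```

Because the clones sit on a private switch, you can patch and break them freely without touching the originals.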
After installing the new Epyc 7443P, my server is now fully built. I started with a first-gen Epyc, and this is now the inventory with the main parts:
Chassis is a Nanoxia Deep Silence 6 Rev B (I can really recommend it)
ASRock Rack ROMED8-2T mainboard
AMD Epyc 7443P
8x Kingston Server Premier 32GB DDR4-3200 CL22 (KSM32RD4/32MEI)
2x ASUS Hyper M.2 x16 Gen 4 (each with 4x 1TB 970 Evo Plus NVMe), a total of 8TB NVMe in a 4-column Storage Spaces mirror configuration
(6+1)x Seagate Exos X16 & (12+1)x Samsung 860 Evo in a tiered-storage 3-column mirror configuration. Each tier has a hot spare.
Microsemi Adaptec HBA 1100-16i and Broadcom HBA 9405W-16i as storage controllers
SSD enclosures from ICY DOCK (ToughArmor MB998IP-B, ToughArmor MB516SP-B)
A total of 10x Arctic F14 PWM PST CO 140mm fans — don’t worry, there are many fans, but the noise level is really low.
1x Intel X550-T2 on the mainboard and 1x Intel X710-T2L as an expansion card for network connectivity.
Maybe later I will add an Nvidia A4000 to the last available PCIe slot. Who knows, maybe vGPU will make it to the prosumer market in the future.
In my opinion this is a balanced build and a good example of a hyperconverged server. The server has both the Hyper-V and the Storage role. I’m pretty much set for the next 3-5 years.
I just bought and installed the new AMD EPYC on my board, and it’s 2x as fast as my previous processor, the 7351P. For the processor to run, you need to update the BIOS of the board. One bad thing is that after the update the board only supports Gen2 and Gen3 EPYCs, and no longer my old one. So in case of a defect of the processor, I would have to downgrade the BIOS. Currently I’m struggling a bit to update the BMC of the board as well. The main reason for that update is that the fan controls are managed by the BMC and not by the BIOS — and currently I have no fan control.
Update (30.08.2021): I managed to install BMC version 1.11, which has fan control. So I’m fine with it, although there is currently a version 1.19 available which cannot be installed (it stops at the verification step).
I just found this PCIe card from ASUS (ASUS Hyper M.2 x16 Gen 4) for about 50€, which lets you connect 4 additional NVMe drives to your mainboard. It’s for my Epyc board (ASRock Rack ROMED8-2T), but it also works with sTRX40 boards. I’ve put in 4x Samsung 970 EVO Plus in a mirror (RAID 1) configuration. The storage will be used primarily for the VMs and the databases. For the adapter to work, you will need to change the PCIe lane configuration (bifurcation) of the slot from x16 to x4+x4+x4+x4.
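Once the drives show up in Windows, the mirror can be built with Storage Spaces roughly like this (a sketch — the pool and volume names are placeholders, and the drive filter assumes the 970 EVO Plus friendly name):

```powershell
# Collect the four NVMe drives that are not yet part of any pool
$disks = Get-PhysicalDisk -CanPool $true |
    Where-Object FriendlyName -like "*970 EVO Plus*"

# Create a pool and a mirrored virtual disk spanning the drives
New-StoragePool -FriendlyName "NVMePool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "NVMePool" `
    -FriendlyName "NVMeMirror" `
    -ResiliencySettingName Mirror -UseMaximumSize
```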
Here is a quick preview of my new board, an ASRock Rack ROMED8-2T. I’ve moved again because I needed more resources to host VMs and got a good offer for an Epyc processor. The board is one of the few that support PCIe 4.0 x16 on all seven ATX slots. Currently on the board is an AMD EPYC 7351P with 4x Kingston Server Premier RAM. My impression so far: it requires no extraordinary cooling for its components. I’m using a Noctua NH-U14S TR4-SP3 for the CPU and a block of two standard 140mm chassis coolers located on the right side of the board. Those are in addition to the five coolers already in the chassis. The board itself does not require additional cooling; however, my SAS and network cards do. Without the extra fans the cards reach temperatures of 70°C.
I will start experimenting with neural networks using Jupyter Notebooks inside the Google Colab environment. This environment brings everything we need to start with machine learning: it is a web-based service with an integrated Python IDE and has Keras/TensorFlow already included.
When finished with the first examples, I will create my own environment to experiment locally.
Today I received a new drive for my hyperconverged server. It is another Seagate Exos X16. I also added another SSD for the caching. Unfortunately, this time the HDD did not show up in the Primordial pool, and it was not possible to add it to my storage pool. When running the Get-PhysicalDisk command, the drive also did not show up. However, it does appear in the Disks section of Server Manager, and I could create a new volume with it.
I’ve tried a few things — resetting the disk, taking it offline, formatting it with GPT, etc. — with no improvement.
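For anyone hitting the same issue, the steps I tried look roughly like this in PowerShell (a sketch — the disk number and serial are placeholders for your own drive):

```powershell
# Show which disks Storage Spaces considers poolable, and why not
Get-PhysicalDisk | Select-Object FriendlyName, CanPool, CannotPoolReason

# Wipe and re-initialize the new drive with GPT
Clear-Disk -Number 5 -RemoveData -RemoveOEM
Initialize-Disk -Number 5 -PartitionStyle GPT

# Reset the drive's Storage Spaces state
Get-PhysicalDisk -SerialNumber "<serial>" | Reset-PhysicalDisk
```

In my case none of this helped; the drive only became poolable after moving it to another channel.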
The reason could be that the SAS controller has a different configuration for this channel (I don’t know why). The difference is that for this drive it shows Partitioned = yes and Mounted = true. The picture shows the state when everything is fine.
When connecting the drive to another channel, everything works as expected, and I could add the drive to my pool. This is a little bit strange, and I will try to find the reason for it. Also, I could not find any configuration options for this.
Once the drive is added to the pool, you can trigger a job to distribute the files evenly across the drives (this takes a full day to complete). The command is:
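Assuming a Storage Spaces pool on Windows Server, the rebalance can be triggered like this (the pool name is a placeholder for your own pool):

```powershell
# Redistribute existing data evenly across all drives in the pool
Optimize-StoragePool -FriendlyName "TieredPool"

# Watch the progress of the long-running rebalance job
Get-StorageJob
```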