<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>unraid on Luis Logs</title>
    <link>https://luislogs.com/tags/unraid/</link>
    <description>Recent content in unraid on Luis Logs</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 13 May 2023 18:40:30 +0900</lastBuildDate><atom:link href="https://luislogs.com/tags/unraid/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Install K3s with Calico and disable Traefik</title>
      <link>https://luislogs.com/posts/install-k3s-with-calico-and-disable-traefik/</link>
      <pubDate>Sat, 13 May 2023 18:40:30 +0900</pubDate>
      
      <guid>https://luislogs.com/posts/install-k3s-with-calico-and-disable-traefik/</guid>
      <description>For those who want to use Calico with k3s instead of Flannel, I am sharing here the steps I followed. For the project I am working on, I had to re-install k3s with Calico upon learning that flannel works on layer-2, whereas I need layer-3 routing for BGP. I am not sure if there’s a way to advertise BGP with flannel as the CNI but it looks like Calico already runs it natively.</description>
<content:encoded><![CDATA[<p>For those who want to use Calico with k3s instead of Flannel, I am sharing here the steps I followed. For the project I am working on, I had to re-install k3s with Calico upon learning that Flannel works at layer 2, whereas I need layer-3 routing for BGP. I am not sure if there’s a way to advertise BGP with Flannel as the CNI, but Calico already runs it natively. That should reduce any additional configuration going forward if your project also requires BGP routing.</p>
<h3 id="uninstall-k3s">Uninstall k3s</h3>
<p>To uninstall, execute the following as root:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">/usr/local/bin/k3s-uninstall.sh
</span></span><span class="line"><span class="cl">rm -rf /var/lib/rancher
</span></span></code></pre></div><h3 id="reinstall-k3s">Reinstall k3s</h3>
<p>Execute:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">curl -sfL https://get.k3s.io <span class="p">|</span> <span class="nv">INSTALL_K3S_EXEC</span><span class="o">=</span><span class="s2">&#34;--flannel-backend=none --disable-network-policy --cluster-cidr=192.168.0.0/16&#34;</span> sh -s - --docker
</span></span></code></pre></div><p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_2.png" alt="Install k3s with calico">
  </p>
<p>Copy the k3s yaml file to your home directory to allow remote access. Ensure correct ownership:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">mkdir /home/luis/.kube/ <span class="c1">#(Only required if you don&#39;t have the .kube directory in your home folder yet)</span>
</span></span><span class="line"><span class="cl">cp  /etc/rancher/k3s/k3s.yaml /home/luis/.kube/config
</span></span><span class="line"><span class="cl">chown -R luis:luis /home/luis/.kube/
</span></span></code></pre></div><p>Check nodes:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">kubectl get node
</span></span></code></pre></div><p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_3.png" alt="Install k3s with calico">
  </p>
<p>Get the token of your master node:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">cat /var/lib/rancher/k3s/server/node-token
</span></span></code></pre></div><p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_4.png" alt="Install k3s with calico">
  </p>
<p>Use this token and the IP of your master node in the installation command below, to be executed as root on each worker node:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">curl -sfL https://get.k3s.io <span class="p">|</span> <span class="nv">K3S_URL</span><span class="o">=</span>https://serverip:6443 <span class="nv">K3S_TOKEN</span><span class="o">=</span>mytoken sh -s - --docker
</span></span></code></pre></div>
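<p>As a sketch, the same agent install can be parameterized with shell variables; the IP and token below are placeholders, not values from this setup:</p>

```shell
# Placeholder values -- substitute your master's IP and the token
# read from /var/lib/rancher/k3s/server/node-token
SERVER_IP="192.168.1.10"
TOKEN="mytoken"

# Build the agent install command (shown with echo instead of running it)
CMD="curl -sfL https://get.k3s.io | K3S_URL=https://${SERVER_IP}:6443 K3S_TOKEN=${TOKEN} sh -s - --docker"
echo "$CMD"
```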
<p>You will then get this error when running kubectl:</p>
<blockquote>
<p>The connection to the server localhost:8080 was refused – did you specify the right host or port?</p>
</blockquote>
<p>This is because flannel was disabled and no CNI is running yet (and agent nodes do not get a kubeconfig for kubectl by default).</p>
<p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_5.png" alt="Install k3s with calico">
  </p>
<p>And if you check the pods from the master node, they will be stuck in the ContainerCreating state:</p>
<p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_6.png" alt="Install k3s with calico">
  </p>
<h3 id="install-calico">Install Calico</h3>
<p>From here you will need to install Calico. To do so, execute:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml
</span></span></code></pre></div><p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_7.png" alt="Install k3s with calico">
  </p>
<p>Then install the required custom resources:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/custom-resources.yaml
</span></span></code></pre></div><p>Now check the pods:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">kubectl get pod -o wide --all-namespaces
</span></span></code></pre></div><p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_8.png" alt="Install k3s with calico">
  </p>
<p>You will notice that some Traefik containers are in an Error or CrashLoopBackOff state. I am not sure why, but I don’t really need Traefik anyway. To remove it:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">kubectl -n kube-system delete helmcharts.helm.cattle.io traefik traefik-crd
</span></span></code></pre></div><p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_9.png" alt="Install k3s with calico">
  </p>
<p>Stop the k3s service:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo systemctl stop k3s
</span></span><span class="line"><span class="cl">sudo systemctl status k3s
</span></span></code></pre></div><p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_10.png" alt="Install k3s with calico">
  </p>
<p>Next, edit the k3s service configuration file:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo vi /etc/systemd/system/k3s.service
</span></span></code></pre></div><p>And add the following line:</p>
<p><code>'--disable=traefik' \</code></p>
<p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_11.png" alt="Install k3s with calico">
  </p>
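<p>For reference, after the edit the ExecStart section of the unit file should look roughly like this (a sketch: the other flags come from the earlier install command and may differ on your system):</p>

```ini
; /etc/systemd/system/k3s.service (excerpt; assumed layout)
ExecStart=/usr/local/bin/k3s \
    server \
        '--flannel-backend=none' \
        '--disable-network-policy' \
        '--cluster-cidr=192.168.0.0/16' \
        '--docker' \
        '--disable=traefik'
```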
<p>Reload the service and delete the Traefik yaml file:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo systemctl daemon-reload
</span></span><span class="line"><span class="cl">sudo rm /var/lib/rancher/k3s/server/manifests/traefik.yaml
</span></span></code></pre></div><p>Start k3s:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo systemctl start k3s
</span></span><span class="line"><span class="cl">sudo systemctl status k3s
</span></span></code></pre></div><p>Check the nodes:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">kubectl get node -o wide
</span></span></code></pre></div><p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_12.png" alt="Install k3s with calico">
  </p>
<p>Check the pods regularly.</p>
<p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_13.png" alt="Install k3s with calico">
  </p>
<p>You might notice that some Calico pods are in a CrashLoopBackOff state. Check again after a few minutes and they should be in the Running state.</p>
<p>
    <img src="/posts/install-k3s-with-calico-and-disable-traefik/20230513_14.png" alt="Install k3s with calico">
  </p>
<p>K3s with Calico should be running fine at this point!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Moving files within Unraid</title>
      <link>https://luislogs.com/posts/moving-files-within-unraid/</link>
      <pubDate>Tue, 28 Mar 2023 18:40:30 +0900</pubDate>
      
      <guid>https://luislogs.com/posts/moving-files-within-unraid/</guid>
      <description>Moving files within Unraid One of the first questions I asked myself is how can I transfer files from an external drive to one of the shares in my array. This is so I can use that external SSD as another cache pool in my Unraid setup. And SpaceInvaderOne saves the day again. I summarized his video below. Follow at your own risk or better yet just watch SpaceInvaderOne’s video here.</description>
<content:encoded><![CDATA[<p>One of the first questions I asked myself was how I can transfer files from an external drive to one of the shares in my array, so that I can use that external SSD as another cache pool in my Unraid setup. SpaceInvaderOne saves the day again; I summarized his video below. Follow at your own risk, or better yet, just watch SpaceInvaderOne’s video here.</p>
<p>In bullet points:</p>
<ul>
<li>All disks and shares can be found in /mnt/.</li>
<li>/mnt/disk1/ is the location of disk1.</li>
</ul>
<p>
    <img src="/posts/moving-files-within-unraid/unraid_1.png" alt="alt text">
  </p>
<ul>
<li>/mnt/user0 contains all of the shares, but only the parts that live on the array, excluding the cache.</li>
<li>Shares are spread across the disks.</li>
<li>All shares on unRAID can be found in /mnt/user. Click on any share and it will show you the path.</li>
<li>If you mount an external drive from Unassigned Devices, the path will be in /mnt/disks/.</li>
<li>If you try to mount a remote share (e.g. a SMB share from another unraid system), the path will be in /mnt/remotes/.</li>
<li>There are different tools that can be used for data transfer, e.g. Krusader, rsync, remote shares, or the CLI.</li>
</ul>
<h3 id="how-to-use-krusader">How to use Krusader</h3>
<p>Install binhex-krusader.</p>
<p>Delete the default /media path mapping.</p>
<p>
    <img src="/posts/moving-files-within-unraid/unraid_2.png" alt="alt text">
  </p>
<p>We will create three different folders. One to access local shares, one for unassigned disks, and another one for remote shares.</p>
<p>On the bottom part click ‘Add another path, port, variable, label or device.’</p>
<p>Follow as below:</p>
<p>
    <img src="/posts/moving-files-within-unraid/unraid_3.png" alt="alt text">
  </p>
<p>Add another one as below:</p>
<p>
    <img src="/posts/moving-files-within-unraid/unraid_4.png" alt="alt text">
  </p>
<p>And another one as below:</p>
<p>
    <img src="/posts/moving-files-within-unraid/unraid_5.png" alt="alt text">
  </p>
<p>Click Apply and wait for installation to complete.</p>
<p>Go to Docker &gt; Krusader &gt; WebUI</p>
<p>
    <img src="/posts/moving-files-within-unraid/unraid_6.png" alt="alt text">
  </p>
<p>Go through the wizard by clicking on OK.</p>
<p>On both the left and right panels, go to the / directory. From here, add a profile called ‘startup’.</p>
<p>
    <img src="/posts/moving-files-within-unraid/unraid_7.png" alt="alt text">
  </p>
<p>Go to Settings &gt; Configure Krusader &gt; Startup profile &gt; choose startup. Apply.</p>
<p>
    <img src="/posts/moving-files-within-unraid/unraid_8.png" alt="alt text">
  </p>
<p>Now you can navigate to your directories and copy or move files across the panels. One good thing to note: even if you close the browser, the copy or move will continue as long as the Docker container is running. For external drives attached via USB, you will have to mount them on the ‘Main’ tab before they appear in the /mnt/disks/ directory.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>New NAS and Homelab setup using Unraid</title>
      <link>https://luislogs.com/posts/new-nas-and-homelab-setup-using-unraid/</link>
      <pubDate>Sat, 18 Mar 2023 18:40:30 +0900</pubDate>
      
      <guid>https://luislogs.com/posts/new-nas-and-homelab-setup-using-unraid/</guid>
      <description>Just very recently I managed to assemble my second NAS — a beefed up version of my first one back in 2012. I decided to settle on an 11th-gen Intel system.
Motherboard: Asrock H570M-ITX/AC CPU: Intel Core i7-11700 RAM: Kingston Fury DDR4 3200MT/s 16GB x 2 Drives: 2x 8 TB WD Red Plus, 2x 1TB Samsung 980 NVMe, 1x 500GB Samsung 860 EVO PSU: Fractal Design ION SFX-L 500W 80PLUS Gold Case: Jonsbo N1 Below you can find the steps I followed to install unRAID v6.</description>
      <content:encoded><![CDATA[<p>Just very recently I managed to assemble my second NAS — a beefed up version of my first one back in 2012. I decided to settle on an 11th-gen Intel system.</p>
<ul>
<li>Motherboard: Asrock H570M-ITX/AC</li>
<li>CPU: Intel Core i7-11700</li>
<li>RAM: Kingston Fury DDR4 3200MT/s 16GB x 2</li>
<li>Drives: 2x 8 TB WD Red Plus, 2x 1TB Samsung 980 NVMe, 1x 500GB Samsung 860 EVO</li>
<li>PSU: Fractal Design ION SFX-L 500W 80PLUS Gold</li>
<li>Case: Jonsbo N1</li>
</ul>

<script src="/shortcode-gallery/jquery-3.7.0.min.js"></script>
<script src="/shortcode-gallery/lazy/jquery.lazy.min.js"></script>
<script src="/shortcode-gallery/swipebox/js/jquery.swipebox.min.js"></script>
<link rel="stylesheet" href="/shortcode-gallery/swipebox/css/swipebox.min.css">
<script src="/shortcode-gallery/justified_gallery/jquery.justifiedGallery.min.js"></script>
<link rel="stylesheet" href="/shortcode-gallery/justified_gallery/justifiedGallery.min.css"/>

<div id="gallery-21f3079619a363458d8ea513db596d29-0-wrapper" class="gallery-wrapper">
<div id="gallery-21f3079619a363458d8ea513db596d29-0" class="justified-gallery">
	
		
		
				
			
			
			
				
			

			
			
				
					
				
			


			
			
			

			
			


			<div>
				
				
					
				
				<a href="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_1.jpg" 
					class="galleryImg"
					
						

						
						

						
					
					>
					<img			
						width="450" height="600"

						
							
							style="filter: blur(25px);"
							
								src="data:image/jpeg;base64,/9j/2wCEAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4&#43;JS5ESUM8SDc9PjsBCgsLDg0OHBAQHDsoIig7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7O//AABEIACAAGAMBIgACEQEDEQH/xAGiAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgsQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5&#43;gEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoLEQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4&#43;Tl5ufo6ery8/T19vf4&#43;fr/2gAMAwEAAhEDEQA/AOetPDcUOrxJPqke9V3qiqMscdD2H49aypYNRg1Rri63BoSBjeSygdMHr24NS2&#43;oXGoJEsSrKLcNvZQA4XPUE&#43;melbNzepq&#43;mxWsMQN6hKrcbx8yehHv6cY4PFLbcuyexoaNqFla6zJbXN0kkssai3kGMEHnacdG6ex/Sul8yP1H5ivFLqF7aYgSK4BxuU5Ge4qL7RJ6/pVOF9TncLs0NLma2lMsbGOZRlT/AHq63TNGEmkyz4NrG6jeR98E87sdx&#43;ny&#43;xrH1LQGhvJYnkMbH5om7N6GrLeKmXS4oSJTeQjy5kbkMoHOcdiQPoee3Keq0NmmtDQl8O21xpu65kiQW5WMbOWdR2OO4HT1qj/wjeg/37v8j/hWppVwk0tqmpXEMFvMplEJfYAACQCewz2PX9DuY8Of897D/wACEpJ20NOW&#43;5//2Q=="
							
							class="lazy"
							data-src="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_1_hu502a9500bb1a9b943faea08ca89d4aac_245901_600x600_fit_q90_lanczos.jpg"
						

						
							
								
							
						
					>
				</a>
			</div>
		
	
		
		
				
			
			
			
				
			

			
			
				
					
				
			


			
			
			

			
			


			<div>
				
				
					
				
				<a href="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_2.jpg" 
					class="galleryImg"
					
						

						
						

						
					
					>
					<img			
						width="450" height="600"

						
							
							style="filter: blur(25px);"
							
								src="data:image/jpeg;base64,/9j/2wCEAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4&#43;JS5ESUM8SDc9PjsBCgsLDg0OHBAQHDsoIig7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7O//AABEIACAAGAMBIgACEQEDEQH/xAGiAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgsQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5&#43;gEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoLEQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4&#43;Tl5ufo6ery8/T19vf4&#43;fr/2gAMAwEAAhEDEQA/AOd0/XZ9qwresdkUiW/8AVhjaO3UVfhmn1DSJlmZ2udrDc4O4DPHX2NcKkZMfmA8jgqehrt9H1Sx0xbgXUMkuWJTy03EjGD&#43;GBRO7dxw7HJ6hpqbVuY22ngkDpVHD/8APT&#43;daWuXE0czJEGjgkJIR1AYA8gH0PNY3mv6mnFaCle5ftSzSPCkTSs74VF6nNd34ethpVzHE8QeaTH71WJUN/EB6ds9/wAMVxeg3/2O&#43;YhF3OBhyMleecCvQ9MvLQ6e9yv7zafuggMWA4HsfSplcqDSepQ&#43;JGnRsks7Li4DgjH8S44H4V5r5cn92u41y4a&#43;kuZXuBIjBQq5PyLtAI9Mda537FY/3v8Ax4URdkOSP//Z"
							
							class="lazy"
							data-src="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_2_hub2c550302bac5fd45f7929f1d478f373_229477_600x600_fit_q90_lanczos.jpg"
						

						
							
								
							
						
					>
				</a>
			</div>
		
	
		
		
				
			
			
			
				
			

			
			
				
					
				
			


			
			
			

			
			


			<div>
				
				
					
				
				<a href="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_3.jpg" 
					class="galleryImg"
					
						

						
						

						
					
					>
					<img			
						width="450" height="600"

						
							
							style="filter: blur(25px);"
							
								src="data:image/jpeg;base64,/9j/2wCEAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4&#43;JS5ESUM8SDc9PjsBCgsLDg0OHBAQHDsoIig7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7O//AABEIACAAGAMBIgACEQEDEQH/xAGiAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgsQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5&#43;gEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoLEQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4&#43;Tl5ufo6ery8/T19vf4&#43;fr/2gAMAwEAAhEDEQA/AOS8OOZp40RG8lhk5O4DHBB6YBGBz6D1q5q3hWO1D31m4mixhwhyYmPr7e/r&#43;FdBoVhp9hstoY/lChhuHLH1z3q9dvZWF008s6rCIylwpHylSOAfU&#43;1Kxa8zxu7SSO4YSMWOeCe9Q7q6jxBp8U6C7tLe4it5ctCZ0wSP6/59a577Dcf882/75NVF6EtHY6R4mWW0EshcXcY8vZGuTLn07c45z&#43;FdBpthc3bLPqkaYfmOFhlYwPX1NcDpGqQ2OwhHWRG3CRSDg&#43;69x17jrXdadrUOoxQ3UG4bTiRB95Tjp&#43;PBB9u1JiS7nQzQw3kD2GoRGS3mG0MBynpj0qh/wgPhz/npc/8AfysXXvFzQo1tp7KZhwzg8J7D1P8AKuc/4SrX/wDn8b/vs1LLV7H/2Q=="
							
							class="lazy"
							data-src="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_3_hu85637e8f45c63fbf3d6468419f54d66f_209538_600x600_fit_q90_lanczos.jpg"
						

						
							
								
							
						
					>
				</a>
			</div>
		
	
		
		
				
			
			
			
				
			

			
			
				
					
				
			


			
			
			

			
			


			<div>
				
				
					
				
				<a href="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_4.jpg" 
					class="galleryImg"
					
						

						
						

						
					
					>
					<img			
						width="450" height="600"

						
							
							style="filter: blur(25px);"
							
								src="data:image/jpeg;base64,/9j/2wCEAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4&#43;JS5ESUM8SDc9PjsBCgsLDg0OHBAQHDsoIig7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7O//AABEIACAAGAMBIgACEQEDEQH/xAGiAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgsQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5&#43;gEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoLEQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4&#43;Tl5ufo6ery8/T19vf4&#43;fr/2gAMAwEAAhEDEQA/ALVt4vghtI4ptPkjMKLHzIOSB7DHakl8dRpJGn9jynIBUmcAH8cda46xuplNyLhykzgq6bATnjsehH9O9WLSeO5kkt5rgW8bMdrMoERbJ&#43;V8cjPrkUnFEqETcvvFBu55Zf7NeMsBgGUHGBj0rO/t2f8A59//ACJ/9aqdyvm5TT38uRflltpiWKt6q/dTjv8AnzVT7HrH9yL/AL6T/wCKotYqNOJUv9Fv4y1xNIOuWEeePw4q5o9xGuk3cjhHlBYuX5LDgn&#43;tdBPbS3Ez&#43;aI4ZD8zLNIFwT169f8APSucuvD1y8l3LYzRHYMSRoTzkduMGlzp9S5Qa&#43;EpQ6pdg&#43;SHZgAW2j9fwqT&#43;1Jv7h/I1St7GeWceWreachgnRexBJ6Vc/sS8/wCebf8Af6qbSErn/9k="
							
							class="lazy"
							data-src="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_4_hu6f80a1014ee310c05e5be6e7e297d998_240949_600x600_fit_q90_lanczos.jpg"
						

						
							
								
							
						
					>
				</a>
			</div>
		
	
		
		
				
			
			
			
				
			

			
			
				
					
				
			


			
			
			

			
			


			<div>
				
				
					
				
				<a href="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_5.jpg" 
					class="galleryImg"
					
						

						
						

						
					
					>
					<img			
						width="450" height="600"

						
							
							style="filter: blur(25px);"
							
								src="data:image/jpeg;base64,/9j/2wCEAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4&#43;JS5ESUM8SDc9PjsBCgsLDg0OHBAQHDsoIig7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7O//AABEIACAAGAMBIgACEQEDEQH/xAGiAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgsQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5&#43;gEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoLEQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4&#43;Tl5ufo6ery8/T19vf4&#43;fr/2gAMAwEAAhEDEQA/APMtN8O3&#43;pL5qoILcDJnmO1APX3rahsNL0q1ku4bV9UaL71w64hU&#43;w/i/wA81oXcWp29qtxqOnvfGcgwZVvKjAyT8g79OfSp4/Dt1rKQzapdRwxnlIYkGEU8n0/H&#43;dJtlpdjJ0HxHfw&#43;KoLm6nKRpgNEV2KFJHQfkfXivSv&#43;E007/n5j/OvO/FGjpawiJJkkniUYJIJZBnn8f8a5TyZvb86Vkx3aPb5PKk0QrJfXN3M43qjxhhEeeMgAnIwNuSax726uLeZbdFltZsDzAgV5jgcAMOFz1J657jFchpHiaOw0q8tpY2uQcsnmrvI3cEc8fj7nikvvFF9fWyRSyYdBj92cHZ2DMenPc880WHzF/WoGKOTKsV0ASGklMjsPRjzz/nNc9tv/APn9j/77qld6ip4DCZv7vRB/Vj9ePaqn29/&#43;eFv/AN&#43;hVEtn/9k="
							
							class="lazy"
							data-src="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_5_hu1ef7542de73363a875a7dfa6a4194309_664266_600x600_fit_q90_lanczos.jpg"
						

						
							
								
							
						
					>
				</a>
			</div>
		
	
		
		
				
			
			
			
				
			

			
			
				
					
				
			


			
			
			

			
			


			<div>
				
				
					
				
				<a href="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_6.jpg" 
					class="galleryImg"
					
						

						
						

						
					
					>
					<img			
						width="380" height="600"

						
							
							style="filter: blur(25px);"
							
								src="data:image/jpeg;base64,/9j/2wCEAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4&#43;JS5ESUM8SDc9PjsBCgsLDg0OHBAQHDsoIig7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7O//AABEIACAAFAMBIgACEQEDEQH/xAGiAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgsQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5&#43;gEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoLEQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4&#43;Tl5ufo6ery8/T19vf4&#43;fr/2gAMAwEAAhEDEQA/AM6fUbKa/aJ28mNWU5WMIAMHoPxretdM8M3dnEkN08LxqAcHLPjvjpXnr3K3JEo7gAj0OK0/D0xiecqcEsMn2xR0KWrs0ejrKz7mRztLHGRjil3yf89K56PxZZ2cawSwXDso&#43;8kQYHPPXcP5U/8A4TXT/wDn2uv/AAHH/wAXWfKzBp32OK1gWUd9tsFxFsGcdCcnNMs5fKtJ2BwTIB&#43;lZ&#43;p/a7KbLxiSMceYpyD/AIU60llvLdoolKKzZZz24xge9WtjoesmJM2rPKZIDOI25XaTj0/pUf8AxO/W5/M1qpAY0Chm4GODS&#43;W395vzquaRXs4n/9k="
							
							class="lazy"
							data-src="/posts/new-nas-and-homelab-setup-using-unraid/images/unraid_6_hub08adc2356c684ea6288757b7e3d889a_172267_600x600_fit_q90_lanczos.jpg"
						

						
							
								
							
						
					>
				</a>
			</div>
		
	
</div>
</div>

<script>
	if (!jQuery) {
		alert("jquery is not loaded");
	}

	$( document ).ready(() => {
		const gallery = $("#gallery-21f3079619a363458d8ea513db596d29-0");
		

		
		let swipeboxInstance = null;

		
		
		gallery.on('jg.complete', () => {
			
				
				
				$(() => {
					$('.lazy').Lazy({
						visibleOnly: true,
						afterLoad: element => element.css({filter: "none", transition: "filter 1.0s ease-in-out"})
					});
				});
			

			swipeboxInstance = $('.galleryImg').swipebox(
				jQuery.extend({},
					{  }
				)
			);
		});

		
		gallery.justifiedGallery({
			rowHeight : "150",
			margins : "5",
			border : 0,
			randomize :  false ,
			waitThumbnailsLoad : false,
			lastRow : "justify",
			captions : false,
			
			
		});

		
		
	});
</script>

<p>Below you can find the steps I followed to install unRAID v6.11.5 with the help of <a href="https://youtu.be/CcRwT7iHIcc">SpaceInvaderOne’s video</a>. Of course, only follow at your own risk; I would even advise you to go to SpaceInvaderOne’s YouTube channel and watch his videos instead.</p>
<h3 id="preparation">Preparation</h3>
<ol>
<li>Create a bootable USB with the Unraid USB creator and allow UEFI boot since there is no external GPU.</li>
<li>Change boot order to have USB as first priority.</li>
<li>Enable VT-d and all other virtualization settings in the BIOS.</li>
<li>Disable built-in RAID function in BIOS.</li>
<li>Plug the server into the router and access it using kamata.local (name-of-server.local).</li>
<li>Change to a static IP address by going under Settings &gt; Network Settings.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_7.webp" alt="alt text">
  </p>
<ol start="7">
<li>Go to the Plugins tab and install the Community Applications plugin manager. You can get it from the Unraid forum &gt; Plugin Support &gt; <a href="https://forums.unraid.net/topic/38582-plug-in-community-applications/#comments">[Plug-In] Community Applications</a>. Copy the URL under the INSTALLATION section.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_8.webp" alt="alt text">
  </p>
<ol start="8">
<li>Go back to Installed Plugins and click on the Community Applications icon.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_9.webp" alt="alt text">
  </p>
<ol start="9">
<li>Go to Plugins &gt; Preclear Disk icon. Preclear only the mechanical disks, not the NVMe or SSD drives. Below, the first one has already been started. You can leave everything else at the defaults, or skip the pre-read to save time. My 8TB drives took about 13 hours to complete the zeroing process and another 13 hours for the post-read.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_10.webp" alt="alt text">
  </p>
<ol start="10">
<li>Once preclear is done, you can create the array.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_11.webp" alt="alt text">
  </p>
<h3 id="creating-the-array">Creating the array</h3>
<p>It’s really surprising how easy it is to create the array in just a few clicks.</p>
<ol>
<li>In the Main tab, assign one of the mechanical disks as parity, but note that the parity drive must always be equal to or larger than any other drive in your array. Assign another drive as disk 1.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_12.webp" alt="alt text">
  </p>
<ol start="2">
<li>Start the array and confirm the warning message prompt.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_13.webp" alt="alt text">
  </p>
<ol start="3">
<li>You will see that the disks are unmountable. This is because they haven’t been formatted yet. Go to the bottom of the page and format all the disks. Make sure only disks belonging to the new array are selected.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_14.webp" alt="alt text">
  </p>
<ol start="4">
<li>Once formatting is complete, the parity-sync will start automatically. This will take time depending on drive capacity; in my case it took around another 13 hours. Note that you don’t have to wait for the parity-sync to complete before using the array. You can already upload files at this point, but you will not have redundancy until parity is complete.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_15.webp" alt="alt text">
  

    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_16.webp" alt="alt text">
  </p>
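<p>The roughly 13 hours lines up with a simple back-of-the-envelope estimate, assuming an average sequential write speed of about 170 MB/s for an 8 TB drive (both numbers are rough assumptions, not measured values from this build):</p>

```shell
# Rough parity-sync estimate: capacity / average throughput
capacity_tb=8
speed_mb_per_s=170   # assumed average sequential speed of the drive

seconds=$(( capacity_tb * 1000000 / speed_mb_per_s ))
hours=$(( seconds / 3600 ))
echo "about ${hours} hours"
```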
<ol start="5">
<li>When parity is complete you will see a green dot to the left of the device name. You can also scroll down, where you should see text showing that parity is valid. From this point, anything you upload to the disks will have redundancy. Just remember that parity needs to be recalculated every time you modify something on the disks.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_17.webp" alt="alt text">
  </p>
<h3 id="creating-a-cache-pool">Creating a cache pool</h3>
<p>Now that you have the array running with the parity-sync complete, you can start configuring the cache pool. Here you will see that I still have my NVMe drives under Unassigned Devices. I plan to have them running in RAID 1 as well, same as the array.</p>
<ol>
<li>Stop the array so you can add a cache pool. I named it cache_protected to remind me that this cache has redundancy (any name will do). Click on Add Pool and assign 2 slots. Select one drive for each slot.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_18.webp" alt="alt text">
  </p>
<ol start="2">
<li>You can click on the name of the cache pool to see its settings; I kept mine at the defaults. Start the array. It will show that the disks are again unmountable, so format them the same way as the array disks.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_19.webp" alt="alt text">
  </p>
<ol start="3">
<li>Cache pool creation will not take long. When it’s done, the second device under Pool Devices will have text saying that it is part of a pool.</li>
</ol>
<p>
    <img src="/posts/new-nas-and-homelab-setup-using-unraid/unraid_20.webp" alt="alt text">
  </p>
<p>This is the final step to get your array running with a cache pool. If you want to move files into any of the shares, you can use a Docker application called Krusader. I just tried it today but haven’t had time to explore it further, so I am parking this for now.</p>
]]></content:encoded>
    </item>
    
  </channel>
</rss>
