tag:blogger.com,1999:blog-77611204783925418742024-03-08T09:56:07.383+02:00Botsikas' BlogTechnological blog publishing tricky IT solutionsAndreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.comBlogger102125tag:blogger.com,1999:blog-7761120478392541874.post-8239089373290997852024-01-05T16:58:00.000+02:002024-01-05T16:58:01.461+02:00LogonCommand doesn't execute in windows sandbox<p>I was trying to use <a href="https://learn.microsoft.com/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-configure-using-wsb-file" target="_blank">Windows Sandbox</a> to run <a href="https://botsikas.blogspot.com/2024/01/running-ubiquiti-unifi-network-server.html" target="_blank">UniFi server</a> and I wanted to install VC140 as a logon command which seemed to not work 🤔 I was mounting the <i>Desktop </i>folder where I had both the executable and the <b><i>startup.cmd</i></b> file. Opening the sandbox and running the script worked. This means that the <i>LogonCommand</i> should also work, right? </p>
<script src="https://gist.github.com/andreasbotsikas/3d495725f7107e168e8113e077cf1c7d.js?file=wrong-startup.cmd"></script>
<p>Well, no, because the script needs to use full paths, as mentioned in <a href="https://www.youtube.com/watch?v=4v0JmPEhaR8" target="_blank">this YouTube video</a>.</p><span><a name='more'></a></span><p>To make the script work, I specified the full path as follows:</p>
<script src="https://gist.github.com/andreasbotsikas/3d495725f7107e168e8113e077cf1c7d.js?file=startup.cmd"></script>
<p>The final <i>wsb</i> file is the following:</p>
<script src="https://gist.github.com/andreasbotsikas/3d495725f7107e168e8113e077cf1c7d.js?file=unifi_server.wsb"></script>
<p>In this file, I mount the <b><i>Ubiquiti UniFi</i></b> folder, which is found on my host machine under <b><i>C:\sndboxfiles</i></b>. This folder contains the extracted files after installing the UniFi server through the <a href="https://www.ui.com/download" target="_blank">downloaded UniFi-installer.exe</a>. I also mount <i>C:\sndboxfiles\Desktop</i> as the Desktop of the sandbox's user (<b>WDAGUtilityAccount</b>). In that folder, I have the <i>VC_redist.x64.exe</i> file, which I downloaded from Microsoft, and the <i>startup.cmd</i>, which installs Visual C++ quietly. The <b><i>LogonCommand</i></b> simply executes the script, which now works because the script invokes <i>VC_redist.x64.exe</i> with its full path.</p>
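In case the embedded gist does not load, here is a minimal sketch of what such a <i>.wsb</i> configuration looks like (the elements are the standard Windows Sandbox schema; the exact host paths mirror my setup and should be adapted to yours):

```xml
<Configuration>
  <MappedFolders>
    <MappedFolder>
      <!-- extracted UniFi server files from the host -->
      <HostFolder>C:\sndboxfiles\Ubiquiti UniFi</HostFolder>
      <ReadOnly>false</ReadOnly>
    </MappedFolder>
    <MappedFolder>
      <!-- contains VC_redist.x64.exe and startup.cmd -->
      <HostFolder>C:\sndboxfiles\Desktop</HostFolder>
      <SandboxFolder>C:\Users\WDAGUtilityAccount\Desktop</SandboxFolder>
      <ReadOnly>false</ReadOnly>
    </MappedFolder>
  </MappedFolders>
  <LogonCommand>
    <!-- must be a full path; a bare "startup.cmd" will not run -->
    <Command>C:\Users\WDAGUtilityAccount\Desktop\startup.cmd</Command>
  </LogonCommand>
</Configuration>
```

Treat this as an illustrative sketch rather than the exact file from the gist.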
Andreas M. Botsikashttp://www.blogger.com/profile/10477012039301070675noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-56770352846725483762024-01-05T16:02:00.000+02:002024-01-05T16:02:24.551+02:00Running Ubiquiti UniFi Network Server on Windows 11<p>I was trying to run the self-hosted <a href="https://www.ui.com/download/releases/network-server" target="_blank">UniFi Network Server v8.0.26</a> on a newly installed Windows 11 but I was getting a cryptic message "Server taking too long to start…" followed by a "Start-up failed" error. It turns out that VC140 runtime is a hidden requirement (Visual C++ redistributable aka MSVC) and in my case it worked fine installing the <a href="https://learn.microsoft.com/cpp/windows/latest-supported-vc-redist?view=msvc-170" target="_blank">latest MSVC for 2015,2017 and 2022 available in the official Microsoft site</a>. Here is how I discovered the missing dependency.</p><span><a name='more'></a></span><p>After installing the product, the whole application is extracted in "<b><i>%USERPROFILE%\Ubiquiti UniFi</i></b>". You will notice that the <b><i>logs</i></b>, <b><i>data</i></b>, <b><i>work</i></b> and <b><i>run</i></b> folders are automatically created the first time you execute the server. To start the server you can use the "<b><i>bin\start.bat</i></b>" file. 
In my case, I modified the 2nd line of the bat file to use "%USERPROFILE%\Ubiquiti UniFi\jre\bin\java.exe" executable which is the bundled version of java, instead of using the system's java.</p><p>Looking in the "<b><i>logs\server.log</i></b>" I observed the following <i><db-server> INFO db</i> logs repeating:</p><p></p><blockquote><p><i>- Database process stopped...</i></p><p><i>- Checking if database needs to be shut down</i></p></blockquote><blockquote><p><i>- Database was not running</i></p></blockquote><blockquote><p><i>- Database configuration, dir=</i></p></blockquote><p></p><p>In the <b><i>bin </i></b>folder, you will notice <b><i>mongod.exe</i></b> which is the mongo DB server used by the app. If you try to run this, you will get a much clearer error message saying that VC140 is missing.</p><p>Hope this process will help you solve other similar issues running UniFi network server.</p>Andreas M. Botsikashttp://www.blogger.com/profile/10477012039301070675noreply@blogger.com1tag:blogger.com,1999:blog-7761120478392541874.post-47021184490946199182023-10-16T00:44:00.004+03:002023-10-16T00:45:26.784+03:00Disabling critical battery action<p><span style="font-family: inherit;"> I have an old laptop with an almost dead battery, which I always use tethered to a power source, mostly as a media device connected to my TV. Occasionally, I need to move it, and if it's open, Windows realizes that the battery is very weak and decides to hibernate, as that's the default critical battery action. This can become very frustrating. The good news is that you can easily address this issue using Windows' <i>powercfg</i> command.</span></p><a name='more'></a><p><span style="font-family: inherit;">If you try to change that action using Control Panel -> Power Options -> Advanced power settings -> Battery -> Critical battery action, you'll notice that you can only choose between "Sleep", "Hibernate", and "Shut down". 
There's no "Do nothing" option as there is in the "Low battery action."</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimDOsEp9G-JgDy0XcseQ5Lt_wP4ia0k3E3neM0PRbvj19AkFsF2gnuo7CjB-eO0KnNkgfZ-1li-eRfPaTrS97gQxq2QF-XxcqZvd8RwBFRh6uY_GSap4NilwdfFjcAmMoQ_Qu06UHJZFK7i6B8Tcq8NeXbvc6Vn0OI1hOcnHMxNSArwfzLB7z4NZjFof4/s437/powerOptions01.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="437" data-original-width="400" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimDOsEp9G-JgDy0XcseQ5Lt_wP4ia0k3E3neM0PRbvj19AkFsF2gnuo7CjB-eO0KnNkgfZ-1li-eRfPaTrS97gQxq2QF-XxcqZvd8RwBFRh6uY_GSap4NilwdfFjcAmMoQ_Qu06UHJZFK7i6B8Tcq8NeXbvc6Vn0OI1hOcnHMxNSArwfzLB7z4NZjFof4/s320/powerOptions01.png" width="293" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p><span style="font-family: inherit;">Fortunately, Windows offers a powerful utility called <i>powercfg </i>that enables you to manage various power settings. In this case, we will use <i>powercfg </i>to prevent the laptop from hibernating on low battery. You'll need a PowerShell or Command Prompt (elevation is not needed), and you can use the following two commands to set "Do nothing" for both "On battery" and "Plugged in":</span></p><p><script src="https://gist.github.com/cbotsikas/be74c8c51f8a6dacd2521e3cab9d8355.js"></script></p><p><span style="font-family: inherit;">Please note the difference in the <i>set<b>dc</b>valueindex </i>and <i>set<b>ac</b>valueindex</i> parameters. 
The "0" at the end means "Do nothing".</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCh5YDCkV0el8-8OxT4AuGQ5HG1aCw0R5UOL3OAp-AHOr6I45xB1YNOskzs-J_aWIu843WP-wmSU2YNXlvJfsHKymB6TC7NVHOovHfC5tvhQpWULdVMheKP1wi_C8gzZ9ntq3PkdWAqLOzrxcJXhsLBn5qakc0caaxyMlPdFX_skq-wxpuVgifHS1MLpc/s1109/powerOptions02.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="625" data-original-width="1109" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCh5YDCkV0el8-8OxT4AuGQ5HG1aCw0R5UOL3OAp-AHOr6I45xB1YNOskzs-J_aWIu843WP-wmSU2YNXlvJfsHKymB6TC7NVHOovHfC5tvhQpWULdVMheKP1wi_C8gzZ9ntq3PkdWAqLOzrxcJXhsLBn5qakc0caaxyMlPdFX_skq-wxpuVgifHS1MLpc/w400-h225/powerOptions02.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p><span style="font-family: inherit;">I've tested this on Windows 10 but it should work on any Windows version.</span></p>Christos Botsikashttp://www.blogger.com/profile/11025565251556231416noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-26176268654370589702023-07-04T18:26:00.002+03:002023-07-10T17:52:49.502+03:00Installing Windows 11 in QEMU on Supported Hardware<p>When attempting to install Windows 11 in QEMU, you may encounter the "unsupported hardware" message. 
This occurs because QEMU does not enable TPM 2.0 support by default.</p><span><a name='more'></a></span><p>To resolve this issue, you will need to add virtual hardware to the hardware list that passes requests through to the onboard chip, as demonstrated below (image from <a href="https://virt-manager.org/" target="_blank">virt-manager</a>):</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEip5p1FO7WUbaYGd6neIzjm-kLkJUJn831zwWIx6yeM5J7sMSv9Ud7bpKQy6DVv7ezcxZTdRVOcu-X6Q3j0QgLRhTakDBLk722zhnpqSXci_YxO7ACsCqmO7uEqOQ1FbofnvlJfyjFn8-FiBtS6dCVKt_1QjCdqXvV1tECn0Gduyjv-4sr0bCP5jV57tQg/s744/add-tpm.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="649" data-original-width="744" height="279" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEip5p1FO7WUbaYGd6neIzjm-kLkJUJn831zwWIx6yeM5J7sMSv9Ud7bpKQy6DVv7ezcxZTdRVOcu-X6Q3j0QgLRhTakDBLk722zhnpqSXci_YxO7ACsCqmO7uEqOQ1FbofnvlJfyjFn8-FiBtS6dCVKt_1QjCdqXvV1tECn0Gduyjv-4sr0bCP5jV57tQg/s320/add-tpm.png" width="320" /></a></div><p>After performing this step, you may receive an error stating that "<i>/dev/fdset/3</i>" "<i>is not a TPM device</i>". This is simply a permissions issue that can be resolved by adding the "<i><b>/dev/tpm0 rw,</b></i>" line in "<i>/etc/apparmor.d/abstractions/libvirt-qemu</i>" right before the line that reads "<i>/dev/net/tun rw,</i>" and restarting AppArmor if you have VMs already running. 
The final file should look like:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmeoBhjG4h_XQu4hyY14un-9JpBPr2Q6CNTpyMR16SfbDHpAmfp_9AK46X32_u-V3Tfz6MOVNbsZxfldmM5wIqmnVFy2VctUAM5TPNpD6aaJcMB-Yfb9nQCRyeAxhH0eMH7j898x7-DdqTaQprmC6UKsO2H9Nltt_2dZHVds5XyAFDsqtJwZohlIoov9g/s686/edit-libvirt-qemu.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="619" data-original-width="686" height="289" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmeoBhjG4h_XQu4hyY14un-9JpBPr2Q6CNTpyMR16SfbDHpAmfp_9AK46X32_u-V3Tfz6MOVNbsZxfldmM5wIqmnVFy2VctUAM5TPNpD6aaJcMB-Yfb9nQCRyeAxhH0eMH7j898x7-DdqTaQprmC6UKsO2H9Nltt_2dZHVds5XyAFDsqtJwZohlIoov9g/s320/edit-libvirt-qemu.png" width="320" /></a></div><div>Once Windows is installed, be sure to install the <a href="https://github.com/virtio-win/virtio-win-pkg-scripts" target="_blank">appropriate guest tool as discussed in this ReadMe file</a>.</div><div><br /></div><div>Additionally, I encountered a problem with screen resolution and was unable to adjust it in the guest OS. After some research, I discovered that selecting <b><i>Virtio</i></b> as the model in the "<i>Video Virtio</i>" settings resolved the issue. Naturally, I also wanted to enable 3D acceleration. 
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEif82-9bI-8x7oeNHmlm1e0xcRicxq4RIzQ4m7coW0ATE6lSOREtiWSkYgLsUlWowLZPyNa6EH5i954SFHpBq6NrBCcQqiujK28AloH0yPvehyuRMAPQASyOmlkGnXCeDF-JD5Kq18L73bk9JnNizqmb7dV3fo_z3GA6HZhCx8mDDg3-7rdQRZuBDPvvts/s338/virtio-model.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="93" data-original-width="338" height="88" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEif82-9bI-8x7oeNHmlm1e0xcRicxq4RIzQ4m7coW0ATE6lSOREtiWSkYgLsUlWowLZPyNa6EH5i954SFHpBq6NrBCcQqiujK28AloH0yPvehyuRMAPQASyOmlkGnXCeDF-JD5Kq18L73bk9JnNizqmb7dV3fo_z3GA6HZhCx8mDDg3-7rdQRZuBDPvvts/s320/virtio-model.png" width="320" /></a></div><div>However, upon doing so, I encountered an error stating "<i>OpenGL is not available</i>". To fix this, I navigated to "<i>Display Spice</i>", changed the <i>Listen type</i> to <i>None</i>, and enabled OpenGL support as illustrated below:</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWeoudM9PSoovouApg9sV0ncEFlZ-1sgH7Wib2K6B7Vi_Bg_QNy6R-oefZmjn-SNB9ZnJuzmau3_W5_qWeX6KG2BnHMIrqq2FJaJ7gIduQD8Dk79xRiA7oob8OOebwAWpBE8fTxeN9a2I0IHlkMQfZ4Iof2bCXFhIYkd0doDm2uFVQZlUV0LT38dy8bkA/s324/spice-settings.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="238" data-original-width="324" height="235" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWeoudM9PSoovouApg9sV0ncEFlZ-1sgH7Wib2K6B7Vi_Bg_QNy6R-oefZmjn-SNB9ZnJuzmau3_W5_qWeX6KG2BnHMIrqq2FJaJ7gIduQD8Dk79xRiA7oob8OOebwAWpBE8fTxeN9a2I0IHlkMQfZ4Iof2bCXFhIYkd0doDm2uFVQZlUV0LT38dy8bkA/s320/spice-settings.png" width="320" /></a></div></div><div>Lastly, I would like to share some helpful commands for starting/stopping VMs and backing up configurations:</div><div>
<script src="https://gist.github.com/andreasbotsikas/4afbeadf4df512b51702fe6c4ebc846e.js"></script>
</div>
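The gist above contains the exact commands I use; in essence, they are standard <i>virsh</i> invocations along these lines (the domain name <i>win11</i> is an assumption, substitute your own VM name):

```shell
# start and gracefully stop the VM
virsh start win11
virsh shutdown win11

# back up the VM configuration to an XML file
virsh dumpxml win11 > win11-backup.xml

# recreate a VM definition from a saved configuration
virsh define win11-backup.xml
```

These require a running libvirt daemon and appropriate permissions (typically membership in the libvirt group).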
<p><b>If you want to use bitlocker</b></p><p>I encountered a problem with BitLocker while using TPM passthrough, which required me to manually input the recovery key each time. To resolve this issue, I ended up using a TPM emulation, following the <a href="https://getlabsdone.com/how-to-enable-tpm-and-secure-boot-on-kvm/" target="_blank">steps outlined in this article</a>. It's important to choose the correct Linux codename during this process. I mistakenly selected "focal" instead of "jammy," resulting in an error related to libssl1.1 – a version not officially supported in Ubuntu 22.</p><p>Additionally, I needed to manually populate the keys as <a href="https://superuser.com/questions/1660806/how-to-install-a-windows-guest-in-qemu-kvm-with-secure-boot-enabled " target="_blank">detailed in this thread</a>, in order to enable secure boot in the UEFI BIOS. Be sure to review the comment below the answer, which addresses the missing "count=1" parameter; otherwise, you may end up with an excessively large <i>keys.img</i> file.</p><p><b>References:</b></p><p>https://askubuntu.com/questions/1365829/qemu-failed-to-passthrough-a-tpm-device</p><p></p><p>https://www.reddit.com/r/Fedora/comments/qqw3sq/qemu_video_virtio_opengl_not_available_after/</p><p>https://superuser.com/questions/1725915/auto-resize-vm-with-windows-greyed-out-since-graphics-type-vnc-does-not-supp</p>Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-8658473076872067652023-05-06T23:51:00.001+03:002023-05-06T23:51:52.128+03:00Copy contacts from outlook to gmail<p>To transfer your contacts from Outlook to Gmail and keep the pictures, you need to export them as vCard files (vcf) and then import them into Google Contacts. However, Outlook does not support exporting multiple contacts as vCard files at once and neither does Google Contacts support importing multiple vCard files at once. 
You can use a workaround to export your contacts in batches and then combine them into a single file. Here are the steps:</p><span><a name='more'></a></span><p><br /></p><p></p><ol style="text-align: left;"><li>Open Outlook (the Windows client) </li><li>(Optional) If you are using non-English characters, you need to change the preferred encoding of Outlook to save vCards as Unicode (UTF-8), otherwise Gmail won't import them correctly.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVYnIee11_Y3BIW78cdWVQTBn3lFPE2l5WGRqfKZlzprCS895qpjEYeMnPw7c0GF2YprnApRorvavXGkdtT5T08j4EcIjvBAhRTJ5pgqsrkqoe_9l-XPPVdmu7dilGB5m6qRIjkE-gDwUH30RippEM4ZSZeKHtKm_UQrmdM2K-qRlSjb9B3boRsxiL/s836/Unicode_vCards.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="442" data-original-width="836" height="211" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVYnIee11_Y3BIW78cdWVQTBn3lFPE2l5WGRqfKZlzprCS895qpjEYeMnPw7c0GF2YprnApRorvavXGkdtT5T08j4EcIjvBAhRTJ5pgqsrkqoe_9l-XPPVdmu7dilGB5m6qRIjkE-gDwUH30RippEM4ZSZeKHtKm_UQrmdM2K-qRlSjb9B3boRsxiL/w400-h211/Unicode_vCards.jpg" width="400" /></a></div><br /></li><li>Navigate to the Contacts folder.</li><li>Select the contacts you want to export. You can use Ctrl+click or Shift+click to select multiple contacts.</li><li>From the ribbon, choose Forward Contact -> As a Business Card. 
This will create a new email message with the vCard files attached.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJRdNzf1l-L7ElLAGWTvw3GTgw243w-d6IbPMmkYDXWg6-LK33sOkkzHwx274Ihh-RKwz6ao3XIjxdGfkMJnvPQMBX2sPzFraTPgnDXXwRtG-1zU95PzOb48RrTLZrGQ0rfw0yHYnlPJF7uprs2MHKM8i9WlHMGefY8ehZLSP_C3dZP8revjSje48a/s1179/Export_As_Business_Card.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="488" data-original-width="1179" height="165" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJRdNzf1l-L7ElLAGWTvw3GTgw243w-d6IbPMmkYDXWg6-LK33sOkkzHwx274Ihh-RKwz6ao3XIjxdGfkMJnvPQMBX2sPzFraTPgnDXXwRtG-1zU95PzOb48RrTLZrGQ0rfw0yHYnlPJF7uprs2MHKM8i9WlHMGefY8ehZLSP_C3dZP8revjSje48a/w400-h165/Export_As_Business_Card.png" width="400" /></a><br /></div><br /></li><li><div class="separator" style="clear: both; text-align: left;">If nothing happens for a long time or you get a memory exception, repeat step 4, selecting fewer contacts. 
In my case, I had to batch them in groups of 50 contacts.</div></li><li>Once the new email message opens, select all attachments by right clicking in one of them and selecting "Select All" (or use the Ctrl+A shortcut) and then right click again to select the "Save As" to save all vCards to a folder on your computer.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBbGEBKouMa2wa3m-cBmSdWz1dEC8cJw8jk3RFhYFoow7wJOyXRkT13qlfCvnaN3DOWO-E7xvdDB0X5kqXBBshnq1kBpygKYJ8V24RzJMXhyxI27FQwIpUn0mGU_SVqWyFxm1yQ9GeplIMZtF58dW-5RH0DxIfFAVFiZhGMMK3GOQDlCfedIDGxcjs/s1440/Select_All.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="553" data-original-width="1440" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBbGEBKouMa2wa3m-cBmSdWz1dEC8cJw8jk3RFhYFoow7wJOyXRkT13qlfCvnaN3DOWO-E7xvdDB0X5kqXBBshnq1kBpygKYJ8V24RzJMXhyxI27FQwIpUn0mGU_SVqWyFxm1yQ9GeplIMZtF58dW-5RH0DxIfFAVFiZhGMMK3GOQDlCfedIDGxcjs/w400-h154/Select_All.png" width="400" /></a></div><br /></li><li>Repeat steps 4-7 for the rest of your contacts until you have exported all of them as vCard files.</li><li>Combine all the vCard files into a single file. To do that, open a command prompt window and navigate to the folder where you saved the vCard files. Type <b><i>copy *.vcf all_contacts.vcf</i></b> and press Enter. This will create a new file called all_contacts.vcf that contains all your contacts.</li><li>Log in to your Google account and go to Google Contacts.</li><li>Click on Import and then select the file all_contacts.vcf from your computer.</li><li>Click Import again to confirm.</li></ol><p></p><p></p><p></p><p>You have successfully imported your contacts from Outlook to Gmail. Gmail tags them with an “Imported on” label, which was a lifesaver while I was experimenting with the vCard encoding. 
It is unfortunate that migrating contacts between Outlook and Gmail is that complicated...</p><div><br /></div>Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-44923423475483611872022-12-31T14:16:00.001+02:002023-01-11T17:55:27.609+02:00Dual boot two windows operating systems<p>I wanted a windows installation where I could <a href="https://github.com/PacktPublishing/Azure-Data-Scientist-Associate-Certification-Guide" target="_blank">write my book</a> and escape from the apps of my everyday windows operating system. You will find plenty of articles about dual boot with Linux, but I couldn't find one on how to dual boot two windows installations. </p><p>In hindsight, after talking to one of my colleagues, there is an easier way to do the same thing using just the windows boot disk and navigating through the <a href="https://www.windowscentral.com/software-apps/windows-11/how-to-create-a-dual-boot-setup-on-windows-11" target="_blank">UI wizard</a>. This article is still relevant if you want to understand what the wizard does, if you want complete control, or if you want to upgrade the OS in just one of your partitions.</p>
<span><a name='more'></a></span>
<script crossorigin="anonymous" integrity="sha512-STof4xm1wgkfm7heWqFJVn58Hm3EtS31XFaagaa8VMReCXAkQnJZ+jEy8PCC/iT18dFy95WcExNHFTqLyp72eQ==" referrerpolicy="no-referrer" src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.3/jquery.min.js"></script>
<script src="https://rawgit.com/moski/gist-Blogger/master/public/gistLoader.js" type="text/javascript"></script>
<p>You will need the installation iso. You can download the <a href="https://www.microsoft.com/software-download/windows11">windows 11 iso directly from Microsoft</a>. If you are looking for windows 10, you need to download <a href=" https://www.microsoft.com/software-download/windows10" target="_blank">the tool</a> and then save the ISO file as shown in the following screenshots.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt6zXBW_YZcWTgEWTbjuJ7rWxMMF1W8EsSJcvP_YLPuKfcC4D_nFsUCBYYHy4Q_ZdCJomf4TbVi1Rz3TCCcQA88XDRW1o3NLPN1RoFvUmpRQZdlQAm1aIYne2IfPC8O3l9lxwh5ocqGVH-U0VpREBHJSJ9fjWPjIy67DAmEON2RyTZl1QSH7HUDjRI/s981/Windows10Iso.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="232" data-original-width="981" height="95" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt6zXBW_YZcWTgEWTbjuJ7rWxMMF1W8EsSJcvP_YLPuKfcC4D_nFsUCBYYHy4Q_ZdCJomf4TbVi1Rz3TCCcQA88XDRW1o3NLPN1RoFvUmpRQZdlQAm1aIYne2IfPC8O3l9lxwh5ocqGVH-U0VpREBHJSJ9fjWPjIy67DAmEON2RyTZl1QSH7HUDjRI/w400-h95/Windows10Iso.png" width="400" /></a></div>
<div>Once you get the ISO file, mount it by opening it with Windows Explorer. In the following examples, we assume that the ISO was mounted as drive <b>E:</b>. Open PowerShell in Administrator mode and find the index of the image you want to install.</div>
<div class="gistLoad" data-file="FindIndex.bat" data-id="313914e29140bb0dadebbf417e0a7f16"><a href="https://gist.github.com/andreasbotsikas/313914e29140bb0dadebbf417e0a7f16" target="_blank">Loading gist</a></div>
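The gist loads the exact command; in essence it is a single <i>dism</i> call along these lines (this assumes the ISO is mounted as <b>E:</b> and ships an <i>install.esd</i>; some ISOs contain <i>install.wim</i> instead, so adjust the file name accordingly):

```bat
REM list the editions contained in the installation image and their indexes
dism /Get-WimInfo /WimFile:E:\sources\install.esd
```

The output lists each edition with its index, name, and size.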
<p>In my case, I wanted to extract Windows 10 Pro, so my index was six (6), as seen in the following screenshot:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL-va9mZqMyD55bv-74MghCzuNR6jOkL779qjIwsR6fmGRygFXlwYVtDq-UkdgqYD1cc-IUe63rzUT-MqoDE9Y_Gznde4Ns9tJwp1-1B7Jo-HAGoknN5F8NOm2TpDzLlmO5eOr9f66LTidNfROR4mjg3rlevSplP7DitRWa_ZDNSAg-InZYslHagFK/s532/Windows10ProIndex.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="160" data-original-width="532" height="96" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL-va9mZqMyD55bv-74MghCzuNR6jOkL779qjIwsR6fmGRygFXlwYVtDq-UkdgqYD1cc-IUe63rzUT-MqoDE9Y_Gznde4Ns9tJwp1-1B7Jo-HAGoknN5F8NOm2TpDzLlmO5eOr9f66LTidNfROR4mjg3rlevSplP7DitRWa_ZDNSAg-InZYslHagFK/s320/Windows10ProIndex.png" width="320" /></a></div><div>As a side note, when I upgraded to windows 11, I followed the same process, and it seems that Windows 11 Pro is also in index 6, but the image is slightly bigger.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghjhqsSRAt3xWHuimkUiuLCRHD665vpqVdD4YQugw81KENwAL4xpvIzOqxO2iJhn9vIm65LMHylPRwKHYtIuzxMkUek-gNts-Gs7HSBr1Z2foH1Vb5yL_hoJ6MUh7BQuL99ca_Lh9LRJWLBXyriYcfGwzVfP_i9N2JfNmdvnoSaIJnKiQ8nuXLBjQk/s420/Windows11ProIndex.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="139" data-original-width="420" height="106" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghjhqsSRAt3xWHuimkUiuLCRHD665vpqVdD4YQugw81KENwAL4xpvIzOqxO2iJhn9vIm65LMHylPRwKHYtIuzxMkUek-gNts-Gs7HSBr1Z2foH1Vb5yL_hoJ6MUh7BQuL99ca_Lh9LRJWLBXyriYcfGwzVfP_i9N2JfNmdvnoSaIJnKiQ8nuXLBjQk/s320/Windows11ProIndex.png" width="320" /></a></div>
Create a temporary folder in c: or wherever you want to extract the windows installation media (in this example, I created c:\tmp):
<div class="gistLoad" data-file="CreateFolder.ps" data-id="313914e29140bb0dadebbf417e0a7f16"><a href="https://gist.github.com/andreasbotsikas/313914e29140bb0dadebbf417e0a7f16" target="_blank">Loading gist</a></div>
Extract the target image there:
<div class="gistLoad" data-file="ExportWindowsImage.bat" data-id="313914e29140bb0dadebbf417e0a7f16"><a href="https://gist.github.com/andreasbotsikas/313914e29140bb0dadebbf417e0a7f16" target="_blank">Loading gist</a></div>
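If the gist does not render, the export step boils down to a <i>dism</i> call like the following (index 6 and the paths match this example; <i>/Compress</i> and <i>/CheckIntegrity</i> are optional):

```bat
REM export edition #6 from the ESD into a WIM that we can apply later
dism /Export-Image /SourceImageFile:E:\sources\install.esd /SourceIndex:6 /DestinationImageFile:c:\tmp\install.wim /Compress:max /CheckIntegrity
```

This also converts the compressed ESD into the WIM format that the apply step expects.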
Once the process completes, you should see the following:
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8TJHaZESTV10G0c2qzcvUXslVnqPFcl8cZdrYrHkeOzbt0hb_n12De0dxkFUxAA9pXdyU5SI7xjeQsOASXnb-bENBC8BR1G2wy2AHmdAj0PmZbmLGu3PqgRJrH9BtwL3J200-x3xNIOu6IK_buXX8TpDhuyGNkCRG6pgxGP2L9C7WTg7m5lFolzBt/s860/ExportImage.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="183" data-original-width="860" height="68" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8TJHaZESTV10G0c2qzcvUXslVnqPFcl8cZdrYrHkeOzbt0hb_n12De0dxkFUxAA9pXdyU5SI7xjeQsOASXnb-bENBC8BR1G2wy2AHmdAj0PmZbmLGu3PqgRJrH9BtwL3J200-x3xNIOu6IK_buXX8TpDhuyGNkCRG6pgxGP2L9C7WTg7m5lFolzBt/s320/ExportImage.png" width="320" /></a></div><br />
Shrink one of your partitions to make room for the new windows partition:<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvWRBZFrYYYCZGmv0QzvWGwfPvTicSg1iE-PWruSRzpIKkKiyx_3vrb5zLP7cwkJn8gyZpgBbbFTulIdjhrXtH2A49EjIwN__vfEdvFR1-igASVEt6cUMhRWsKRs24Uyy_2pPzMGPDfi4nbVEkBDEz15CUlHggTnB7_8cAzPhxiO096dWdZgYP_GO3/s899/ShrinkVolume.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="529" data-original-width="899" height="188" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvWRBZFrYYYCZGmv0QzvWGwfPvTicSg1iE-PWruSRzpIKkKiyx_3vrb5zLP7cwkJn8gyZpgBbbFTulIdjhrXtH2A49EjIwN__vfEdvFR1-igASVEt6cUMhRWsKRs24Uyy_2pPzMGPDfi4nbVEkBDEz15CUlHggTnB7_8cAzPhxiO096dWdZgYP_GO3/s320/ShrinkVolume.png" width="320" /></a></div><br /><div><br />
Create the new partition where you will install your windows, and disable BitLocker for that drive if it is automatically enabled. Let’s assume that you mounted the new partition as drive <b>P:</b>.<br />
Apply the extracted image to that partition using the following command:
<div class="gistLoad" data-file="ApplyWindowsImage.bat" data-id="313914e29140bb0dadebbf417e0a7f16"><a href="https://gist.github.com/andreasbotsikas/313914e29140bb0dadebbf417e0a7f16" target="_blank">Loading gist</a></div>
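In essence, the apply step is a single command along these lines (<b>P:</b> is the new partition from the previous step; the exported WIM contains a single image, hence index 1):

```bat
REM lay the extracted Windows image onto the new partition
dism /Apply-Image /ImageFile:c:\tmp\install.wim /Index:1 /ApplyDir:P:\
```

Run it from an elevated prompt; it takes several minutes depending on disk speed.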
In the end you should see the following:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0VxFJXkaroO71h547i6o2iaYV9PHAT9XSP3mSkUWz-VPQbxVfE04IClbJ1njr4KmDkCUgjG9Y6Nysv828qCMs1IUPf2OCRNErLPRgcqcRFXy9mQftCUL4ttU2fW8T_T_nX7EXyvxdWif5q81UyirG8UkXl-PqN9dEOr4e3l6s9I9j5u9I2JT3Iw0x/s886/ApplyImage.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="216" data-original-width="886" height="78" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0VxFJXkaroO71h547i6o2iaYV9PHAT9XSP3mSkUWz-VPQbxVfE04IClbJ1njr4KmDkCUgjG9Y6Nysv828qCMs1IUPf2OCRNErLPRgcqcRFXy9mQftCUL4ttU2fW8T_T_nX7EXyvxdWif5q81UyirG8UkXl-PqN9dEOr4e3l6s9I9j5u9I2JT3Iw0x/s320/ApplyImage.png" width="320" /></a></div><br /><div>Once this is finished, your new operating system is waiting for you to boot it up to finalize the setup process (drivers/user account etc.). </div><div><br /></div><div>To boot the new windows installation, you will need to use the bcdboot command and add the new windows installation path (<b>P:\Windows\</b>) to the list of operating systems you can use. To add it, use the following command:
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDFUwLxh45sDBuYWT5kxUVzlh28cQhz7TtbTOTiy5FAA_34VSg0MA92JtHF0-qJr7a97B02HncdC6p5wxvXT4Y7XvuQnNeAtg_Gefyr12-7XPP88bbpFHL9SjsOQkRx0c1PiAeYziH3zeOyHKTXIHu1iEor5n9ngalvI3aosRLB61t-XnaLAyxkbI_/s686/bcdboot_addlast.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="81" data-original-width="686" height="38" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDFUwLxh45sDBuYWT5kxUVzlh28cQhz7TtbTOTiy5FAA_34VSg0MA92JtHF0-qJr7a97B02HncdC6p5wxvXT4Y7XvuQnNeAtg_Gefyr12-7XPP88bbpFHL9SjsOQkRx0c1PiAeYziH3zeOyHKTXIHu1iEor5n9ngalvI3aosRLB61t-XnaLAyxkbI_/s320/bcdboot_addlast.png" width="320" /></a></div>
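The command in the screenshot amounts to the following (bcdboot copies the boot files from the new installation, and <i>/addlast</i> appends the firmware entry at the end of the boot order):

```bat
REM register the new installation with the boot manager
bcdboot P:\Windows /addlast
```

Treat this as a sketch; match the drive letter to wherever you applied the image.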
If you want to modify the record, you can list all boot entries using the bcdedit command. Typing this command in an elevated PowerShell or Command Prompt will list all entries. Our target record is the one pointing to the <b>P:</b> drive, and its ID is one of the well-known ones, <b>{default}</b>:
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIzhSYmgbkEjdsOHmIcK4aj5B-Vv3_gduc0O4WiE6ZMPqsHctApzqKahAzZbNbhwcuc6hN7HPzuv2QXmahCt7dShCNteOCAftXdj8Qm-uA3l2LYoOdsd5vVQKrdylPjXJuvlfSteX1JXWFHxedSISBlamSmtLa2Ipotqrx8oh84Pn-fqOPZy-h_vOp/s900/Bcdboot_new_entry.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="461" data-original-width="900" height="164" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIzhSYmgbkEjdsOHmIcK4aj5B-Vv3_gduc0O4WiE6ZMPqsHctApzqKahAzZbNbhwcuc6hN7HPzuv2QXmahCt7dShCNteOCAftXdj8Qm-uA3l2LYoOdsd5vVQKrdylPjXJuvlfSteX1JXWFHxedSISBlamSmtLa2Ipotqrx8oh84Pn-fqOPZy-h_vOp/s320/Bcdboot_new_entry.png" width="320" /></a></div><br />
Let’s rename it to something more memorable:
<div class="gistLoad" data-file="ChangeOSDescription.bat" data-id="313914e29140bb0dadebbf417e0a7f16"><a href="https://gist.github.com/andreasbotsikas/313914e29140bb0dadebbf417e0a7f16" target="_blank">Loading gist</a></div>
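The rename is a one-line <i>bcdedit</i> call (run elevated; the description text below is just an example of what you might want to see in the boot menu):

```bat
REM give the {default} entry a friendlier boot-menu name
bcdedit /set {default} description "Windows 10 Pro - Writing"
```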
You can also set how long the boot menu waits and which OS loads first through the “Startup and Recovery” options of the System properties: <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9Jjxv1Sm6Q30dBQAwhi6xLalZeds5HYureqfEIhd4MBEBkl-hYT4BSOK3lIKGXSD01SibPP24Sf_M_TZy4IjtGAN6p8hCJvaGtmVU77AVvwIH7C2RLvTFjcsJPl8GWl8xa3iYvnD3Mzxn2MlXiNedhToIv6pE0mQb1tjPllB7y0yvbNTAgkpMv2fd/s900/StartupSettings.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="645" data-original-width="900" height="229" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9Jjxv1Sm6Q30dBQAwhi6xLalZeds5HYureqfEIhd4MBEBkl-hYT4BSOK3lIKGXSD01SibPP24Sf_M_TZy4IjtGAN6p8hCJvaGtmVU77AVvwIH7C2RLvTFjcsJPl8GWl8xa3iYvnD3Mzxn2MlXiNedhToIv6pE0mQb1tjPllB7y0yvbNTAgkpMv2fd/s320/StartupSettings.png" width="320" /></a></div><br /><div>After that, you will need to reboot your machine, select your new operating system, and finalize the setup, and that’s it. Once it’s done, you can use the new operating system as you would typically do. I even upgraded from windows 10 to 11 through the standard Windows upgrade process. </div><div><br /></div><div>The same process works if you want to install a new windows installation over an existing installation. In my case, I booted into my “<i>Personal windows</i>”, unlocked the other BitLocker-protected windows partition, formatted it, and used the <b>dism /Apply-Image</b> command to apply the windows 11 image. After that, I had to boot into my other windows partition as usual to complete the installation (no need to modify the boot loader). 
</div><div><br /></div><div>An alternative is to <a href="https://learn.microsoft.com/windows-hardware/manufacture/desktop/boot-to-vhd--native-boot--add-a-virtual-hard-disk-to-the-boot-menu" target="_blank">boot from VHDX</a>, which is slower than using the actual partition.</div><div><br /></div><div>Finally, if you are reading this article trying to install an isolated Windows installation to try out some software or run an executable you don’t fully trust, you can use the <a href="https://learn.microsoft.com/windows/security/threat-protection/windows-sandbox/windows-sandbox-overview " target="_blank">Windows Sandbox</a>.</div><div><br /></div><div><b><u>References:</u></b></div><div>More info on the <a href="https://www.itechguides.com/convert-esd-to-wim/" target="_blank">extract image process</a>. </div><div>More info on the <a href="https://learn.microsoft.com/windows-hardware/manufacture/desktop/capture-and-apply-windows-system-and-recovery-partitions" target="_blank">apply Windows image process</a>.</div><div>How to <a href="https://learn.microsoft.com/windows-hardware/drivers/devtest/changing-the-friendly-name-of-a-boot-entry" target="_blank">rename boot entry labels</a>.</div><div>BCDBoot reference: </div><div><ul style="text-align: left;"><li><a href="https://learn.microsoft.com/windows-hardware/manufacture/desktop/boot-to-vhd--native-boot--add-a-virtual-hard-disk-to-the-boot-menu " target="_blank">Boot to VHD</a></li><li><a href="https://learn.microsoft.com/windows-hardware/manufacture/desktop/bcdboot-command-line-options-techref-di" target="_blank">BCDBoot options</a></li></ul></div>Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-2915501762114454512022-12-08T22:56:00.002+02:002022-12-08T22:56:52.553+02:00Convert VHS mpg files to mp4 to import in DaVinci Resolve<p>I recently got my hands on a bunch of mpg files ripped from VHS cassettes (32 files spanning 157 GB), 
and I wanted to edit them with DaVinci Resolve. To my surprise, mpg files are not supported, so I had to convert them. I chose to use FFmpeg to convert them to the H.264/MPEG-4 AVC compression format with Advanced Audio Coding (AAC) audio, using the following command:</p>
<script src="https://gist.github.com/andreasbotsikas/6fc6e65222b9a7fafa43284bd7cf752c.js"></script>
<a name='more'></a>
<p>The command <a href="https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/forfiles">loops through</a> the mpg files in the directory and converts them to mp4 files with FFmpeg. The end result was about 3.4 times smaller, meaning that a 7.38 GB, 2-hour-and-5-minute mpg file shrunk to 2.17 GB with the same quality.</p>
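For readers not on Windows, the same loop can be sketched in bash. The encoder flags are assumptions on my part (-c:v libx264 for H.264 video, -c:a aac for AAC audio, and -crf 18 as a commonly used near-lossless quality setting — adjust to taste); the function below prints each command instead of running it:

```shell
# Bash sketch of the Windows forfiles loop above. The encoder flags are
# assumptions: -c:v libx264 (H.264 video), -crf 18 (quality), -c:a aac (AAC audio).
convert_all() {
  for f in *.mpg; do
    [ -e "$f" ] || continue                                   # skip when no .mpg files match
    out="${f%.mpg}.mp4"
    echo ffmpeg -i "$f" -c:v libx264 -crf 18 -c:a aac "$out"  # dry run: print only
    # Remove the "echo" above to actually run the conversion.
  done
}
convert_all
```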
<p>Note: If you run into an “Unknown encoder ‘libx264’” issue, you will need an FFmpeg executable built with libx264 support enabled (the --enable-libx264 configure flag). I used a build from <a href="https://www.gyan.dev/ffmpeg/builds/">gyan.dev</a>.
</p>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-79507049228882067902021-12-01T13:18:00.018+02:002023-05-12T10:22:35.617+03:00Genomics in Azure<p>Genomics analysis is an interesting field that has high computational and storage requirements. On top of that, there are compliance requirements, especially if the analysis happens on top of patient clinical data. This makes genomic research activities great candidates to run in a compliant, elastic cloud: none other than Azure, which even covers the recent <a href="https://docs.microsoft.com/en-us/compliance/regulatory/offering-nen-7510-netherlands" target="_blank">NEN 7510 standard</a>, a mandatory requirement for all organizations in the Netherlands that process patient health information.</p>
<a name='more'></a>
<p>When it comes to genomics, a big chunk of the workload is related to <a href="https://en.wikipedia.org/wiki/Genome-wide_association_study" target="_blank">genome-wide association study</a> (GWAS) which is the study of association between genomes and genetic variants on certain traits or diseases. This is done by mapping and aligning genomes and then running analysis on top of those sequences. The following diagram shows some of the components you may want to consider for an enterprise ready architecture that supports genomics workloads.</p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfqHYxTH0uqiT0T1FGYT0PTHFtrAhzoj0Vb0VaY9FUtJ08B9eW7Yx2edS8fd6O57HITa3BV06uSvzW6Z2MO_QHDeJq0Eg8Zi2izfFUAeizVq8VDjfvBxyzP-HWWv0BRWMcGHrtR_w92mc/s2048/Running_Genomics_In_Azure.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1141" data-original-width="2048" height="303" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfqHYxTH0uqiT0T1FGYT0PTHFtrAhzoj0Vb0VaY9FUtJ08B9eW7Yx2edS8fd6O57HITa3BV06uSvzW6Z2MO_QHDeJq0Eg8Zi2izfFUAeizVq8VDjfvBxyzP-HWWv0BRWMcGHrtR_w92mc/w544-h303/Running_Genomics_In_Azure.png" width="544" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Components commonly seen in genomics solutions in Azure (<a href="https://1drv.ms/p/s!Ao5RxwrMaoA5gnCPeHiaC4aBaQEL?e=Mein1P" target="_blank">see pptx version</a>)</td></tr></tbody></table><p></p><p>Let's start analyzing the diagram from the storage layer. Data coming out of a <a href="https://en.wikipedia.org/wiki/DNA_sequencer" target="_blank">sequencer</a> can be 100s of Gigabytes per single sample. So when you are up for population genomics, you can easily reach Petabytes and Exabytes scales. 
Here are some of the common things you will need to address:</p><p></p><ul style="text-align: left;"><li>Move data to the cloud. You can use <a href="https://docs.microsoft.com/en-us/azure/data-factory/introduction">Azure Data Factory</a> (ADF) to bring in data from publicly accessible sources, like other clouds, or you can use <a href="https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipelines-activities?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics">Azure Synapse pipelines</a> if you already use it or you want to enable the <a href="https://docs.microsoft.com/en-us/azure/synapse-analytics/security/workspace-data-exfiltration-protection">data exfiltration protection that Azure Synapse Analytics offers</a>. </li><li>Organize and store data. Have a look at <a href="https://aka.ms/adls/hitchhikersguide">https://aka.ms/adls/hitchhikersguide</a> which offers a great guide on the things you need to consider while structuring a data lake. Also make sure you read about the <a href="https://docs.microsoft.com/azure/storage/blobs/lifecycle-management-overview" target="_blank">life-cycle management capabilities of blob storage</a> that will help you phase out old research data to reduce the cost of storage and <a href="https://docs.microsoft.com/azure/storage/blobs/storage-blob-reserved-capacity" target="_blank">reserved capacity</a> that will reduce the overall storage ownership cost.</li><li>Govern data perhaps with <a href="https://docs.microsoft.com/en-us/azure/purview/overview">Azure Purview</a> and of course using the <a href="https://azure.microsoft.com/en-us/overview/trusted-cloud/compliance/">compliance enforcements tools available in Azure</a>.</li></ul><div>On the compute layer, if you are looking for a research environment where you have some extra compute, you can leverage the <a href="https://github.com/microsoft/genomicsnotebook/tree/main/genomics-data-science-vm" target="_blank">Genomics Data Science VM</a> which deploys 
a Windows or Linux Virtual Machine (VM) with all sorts of common data science tools and genomics-specific resources, like <a href="https://github.com/microsoft/genomicsnotebook/tree/main/sample-notebooks" target="_blank">sample notebooks</a> and <a href="https://www.bioconductor.org/packages/release/BiocViews.html#___Workflow" target="_blank">common genomics workflows that can run on top of Bioconductor</a>. This VM will get you started in minutes, but it comes with all the pain of managing the VM, updating the operating system, backing up your configuration, etc. Moreover, the whole processing is done within the same VM, so you can only scale up to more powerful and also more expensive <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/sizes" target="_blank">VM SKUs</a>.</div><p>The other approach is to scale out to multiple, smaller VMs by breaking the job you have to do into multiple smaller tasks. This is where the genomics workflow managers come into play. Workflow managers help scientists scale an existing research process by guiding them in building repeatable and auditable workflows (something you may need due to regulatory requirements). There are quite a few well-established workflow managers out there:</p><p></p><ul style="text-align: left;"><li><a href="https://github.com/broadinstitute/cromwell" target="_blank">Cromwell</a> from the Broad Institute of MIT and Harvard allows you to orchestrate this type of workload using the <a href="https://github.com/openwdl/wdl" target="_blank">Workflow Description Language (WDL)</a> or the <a href="https://www.commonwl.org/" target="_blank">Common Workflow Language (CWL)</a>. 
</li><li><a href="https://usegalaxy.org/">Galaxy</a> is another workflow manager that is widely used in on-premises deployments and works in Azure as well.</li><li><a href="https://github.com/snakemake/snakemake">Snakemake</a> is another popular tool, mainly because scientists use Python to define their workflows.</li><li>Other solutions include <a href="https://github.com/nextflow-io/nextflow">nextflow</a>, which has great integration with Azure, <a href="https://www.nvidia.com/en-us/clara/genomics/" target="_blank">Parabricks, which was acquired by NVIDIA</a>, and <a href="https://www.bioconductor.org/">Bioconductor</a>.</li></ul><div>Most of those workflow managers break the job into smaller tasks and then pass them to a task scheduler to log the necessary auditing info and to execute and monitor the actual tasks. Task schedulers are either implementations on top of the <a href="https://github.com/ga4gh/task-execution-schemas" target="_blank">Task Execution Service (TES) API</a>, the <a href="https://slurm.schedmd.com/overview.html">Slurm Workload Manager</a> or <a href="https://openpbs.org/">OpenPBS</a>, all of which are schedulers coming from the <a href="https://azure.microsoft.com/solutions/high-performance-computing/" target="_blank">High Performance Computing (HPC)</a> field. Most of those schedulers integrate well with Azure native resources like:</div><div><ul style="text-align: left;"><li><a href="https://docs.microsoft.com/azure/cyclecloud/overview">CycleCloud</a>, which is commonly used in hybrid scenarios. Manoj has a great two-part blog on his LinkedIn profile showing how you can run Snakemake on top of CycleCloud. If you are interested, start from his introductory blog "<a href="https://azure.microsoft.com/en-us/blog/power-your-genomic-data-analysis-on-azure-with-azure-cyclecloud/">Power your genomic data analysis on Azure with Azure CycleCloud</a>". Most of the schedulers support CycleCloud. 
For example, if you already have bash shell scripts for coordinating and submitting jobs to an on-premises Slurm cluster, CycleCloud allows you to reuse the same scripts and operationalize them with cloud elasticity. If you are OK with having a VM to host the control plane (CycleCloud comes from an on-premises-first approach where you host everything), then this is probably a good option.</li><li><a href="https://docs.microsoft.com/en-us/azure/batch/batch-technical-overview">Azure Batch</a> is probably the most cost-efficient service for running batch jobs. Even Azure Machine Learning (AzureML) uses it behind the scenes to implement scalable training processes. Microsoft has implemented a <a href="https://github.com/microsoft/CromwellOnAzure/blob/master/src/TesApi.Web/BatchScheduler.cs">Cromwell TES Batch Scheduler</a>, which you can probably port to any workflow manager that supports TES (including <a href="https://snakemake.readthedocs.io/en/stable/executing/cloud.html#executing-a-snakemake-workflow-via-ga4gh-tes" target="_blank">Snakemake</a>), or you can use dedicated integrations like <a href="https://github.com/Azure/azure-hpc/tree/master/LifeSciences/SnakemakeBurst">SnakemakeBurst</a>.</li><li><a href="https://docs.microsoft.com/azure/aks/intro-kubernetes">Azure Kubernetes Service (AKS)</a>. Owning an AKS cluster is not trivial, and you will need to have at least 3 VMs running. I would advise this option only if you already have an AKS cluster running and want to utilize the spare compute, or if you plan to host other loads in the AKS cluster and are thus planning to get the know-how of owning an AKS cluster.</li></ul></div><div>Let's zoom into the Spoke Virtual Network reference on the diagram. 
Due to the sensitivity of the data handled in such solutions, it is common to see <a href="https://docs.microsoft.com/azure/security/fundamentals/isolation-choices#networking-isolation" target="_blank">networking isolation</a> on top of the <a href="https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection" target="_blank">identity protection that Azure Active Directory provides</a>. <a href="https://docs.microsoft.com/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli">The hub-spoke network topology</a> is widely adopted by enterprises to allow them to scale within the cloud. Smaller enterprises may start with the <a href="https://github.com/Azure/Enterprise-Scale/tree/main/docs/reference/treyresearch">Trey Research reference architecture</a> and scale from there.</div><div>With the network isolation in place, scientists need to get access to the isolated data and the various resources. If the scientists already have their own devices to support them in their daily work, they can re-use that investment and connect from their machines to the private network with technologies like <a href="https://docs.microsoft.com/azure/vpn-gateway/tutorial-site-to-site-portal" target="_blank">site-to-site VPN</a> and <a href="https://azure.microsoft.com/services/expressroute/" target="_blank">ExpressRoute</a> if they are working from an on-premises environment. Alternatively, scientists could connect with an ad-hoc <a href="https://docs.microsoft.com/azure/vpn-gateway/point-to-site-about" target="_blank">point-to-site VPN</a> if they are working remotely. These approaches assume that the enterprise already has a way to attest to the health of the devices, something that can be imposed using <a href="https://docs.microsoft.com/azure/active-directory/conditional-access/overview">Azure Active Directory's Conditional Access</a>. 
If the scientists don't have their own devices, then they can either leverage the VM solution mentioned above and get access to it through <a href="https://docs.microsoft.com/azure/bastion/bastion-overview">Azure Bastion</a>, or the enterprise can adopt <a href="https://docs.microsoft.com/azure/architecture/example-scenario/wvd/windows-virtual-desktop" target="_blank">Azure Virtual Desktop</a>, probably with a <a href="https://docs.microsoft.com/en-us/azure/virtual-desktop/configure-host-pool-personal-desktop-assignment-type" target="_blank">personal desktop assignment</a> approach.</div><div><p>Microsoft has provided quite a few <a href="https://www.microsoft.com/genomics/" target="_blank">resources for folks to work with genomics in Azure</a>, enabling institutes to build <a href="https://terra.bio/" target="_blank">platforms like Terra</a>, which can eventually support any type of research activity.</p><p>One of these resources is the <a href="https://github.com/microsoft/CromwellOnAzure" target="_blank">Cromwell on Azure</a> solution. With this resource, you can easily kick-start a Cromwell workflow manager that uses TES as a task scheduler to schedule the jobs on top of Azure Batch. The Microsoft repository provides multiple examples that you can take and customize to your needs, under the <a href="https://github.com/microsoft/CromwellOnAzure#run-common-workflows" target="_blank"><i>Run Common Workflows</i> section of the readme file</a>. The idea is that you dockerize your existing tools (or use the existing <a href="https://hub.docker.com/r/broadinstitute/genomes-in-the-cloud/" target="_blank">docker images</a>) and Cromwell will execute them. 
For example, if you already have a pipeline that does mutation detection, you can containerize and standardize your pipeline using the <a href="https://github.com/openwdl/wdl" target="_blank">Workflow Description Language (WDL)</a> or the <a href="https://www.commonwl.org/" target="_blank">Common Workflow Language (CWL)</a>. Learning WDL is pretty easy, and you can leverage <a href="https://github.com/openwdl/learn-wdl" target="_blank">this great open-source learning path for WDL</a>. You then establish a process to migrate your data into blob storage, where you can build your data provenance with native storage account features for <a href="https://docs.microsoft.com/azure/storage/blobs/security-recommendations" target="_blank">security</a> and <a href="https://docs.microsoft.com/azure/storage/blobs/security-recommendations" target="_blank">data protection</a>. The data can be picked up by Cromwell to be processed, and the results can be stored back in blobs for downstream analysis. Having the <a href="https://docs.microsoft.com/azure/storage/blobs/storage-how-to-mount-container-linux" target="_blank">data in a blob, you can easily attach it</a> to a Genomics Data Science VM, which has all the downstream analysis tools you may be looking for in Python, R or even Bioconductor.</p><p>If you are already familiar with the Spark ecosystem, you can use <a href="https://glow.readthedocs.io/en/latest/tutorial.html">Glow to do GWAS on top of Spark</a>. </p><p>Finally, if you want to avoid the hassle of building your own platform, you can take advantage of <a href="http://aka.ms/genomics" target="_blank">Microsoft's Genomics service</a>, which is a cloud implementation of the Burrows-Wheeler Aligner (BWA) and the Genome Analysis Toolkit (GATK) for secondary analysis. 
Alternatively, you can look at Microsoft's partners' turn-key solutions like <a href="https://illumina.github.io/dragen-azure-quickstart/3.9.03/">DRAGEN on Azure, provided by Illumina</a>, which uses the components mentioned above and provides pre-built pipelines out of the box.</p><p>As you saw, Azure provides you with a lot of options when it comes to genomics: from basic elastic infrastructure, to genomics notebooks, to individual components that do specific analysis, to end-to-end solutions. To decide what is the best fit for your needs, let's look into the following questions:</p><p></p><ul style="text-align: left;"><li>How much data do you have?</li><li>What are you looking to do with it? Are you looking to combine it with Electronic Medical Records (EMR) data or imaging data like <a href="https://docs.microsoft.com/azure/healthcare-apis/dicom/dicom-services-overview" target="_blank">Digital Imaging and Communications in Medicine (DICOM)</a> data?</li><li>What is the current state of your environment? Do you already have a pipeline? Are you looking to scale out the existing pipeline to handle more data?</li></ul><p></p><p>Once you have those answers, you can decide which Azure components you will need. 
Feel free to use the <a href="https://docs.microsoft.com/azure/architecture/icons/" target="_blank">Azure Icons Pack</a> and my <a href="https://1drv.ms/p/s!Ao5RxwrMaoA5gnCPeHiaC4aBaQEL?e=P0vWCh" target="_blank">PowerPoint with the schemas shown in this blog post</a> to start designing your own architectures.</p><p>Looking forward to seeing what type of solution you come up with in your genomics journey!</p></div><br /><br />Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com1tag:blogger.com,1999:blog-7761120478392541874.post-16552733335726676412021-09-09T10:14:00.008+03:002021-12-03T00:02:33.626+02:00 Network link is disconnectedYou get a new device that has a 1 Gb Ethernet card (mine has the I219-LM) and you start getting random network disconnects. Been there, and it is frustrating. There are a couple of reasons why this could happen (faulty drivers, faulty hardware, more than 100 m from the router), but in my case, it was a faulty cable that was easily identified with an Ethernet cable continuity checker.<a name='more'></a>
I have a router far away from my machine. Both devices support gigabit connectivity, but they were auto-negotiating and connecting at 100 Mbps due to the faulty patch cable. The problem was that from time to time the network connectivity dropped, as if someone had pulled the plug. <div><br /></div><div>In the Event Viewer, in the system log, I could see the following sequence of events from the e1dexpress source: </div><div><ul style="text-align: left;"><li><b>Warning</b>: Network link is disconnected.</li><li><b>Information</b>: Network link has been established at 100Mbps full duplex. </li></ul>As a workaround until I get a new patch cable, I disabled the auto-negotiation feature of my network card and hardcoded the connectivity to 100 Mbps full duplex. I will probably drop it even lower if I observe more disconnects.
To do that: </div><div><ol style="text-align: left;"><li>Right-click the Start menu button.</li><li>Click Device Manager. </li><li>Expand Network adapters.</li><li>Right-click the Ethernet adapter that is disconnecting.</li><li>Click Properties. </li><li>Click the Advanced tab. </li><li>Click "Speed & Duplex" in the list of properties.</li><li>Select 100 Mbps Full Duplex (or an even lower setting). </li><li>Click OK. </li></ol><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzuskOunBOTFtsYvcsiAct50nnMuBXwUTPaxPkoM4BDbK5pPmc0sKm8NXBx6x_tgh_vZ3EjImsS7UN2zjBCBfqa34BeWXYLLByc3kHXSDzghFKP4Jl5zDOXInC1t5Yhm7nRKQwjo_pC0E/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="552" data-original-width="691" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzuskOunBOTFtsYvcsiAct50nnMuBXwUTPaxPkoM4BDbK5pPmc0sKm8NXBx6x_tgh_vZ3EjImsS7UN2zjBCBfqa34BeWXYLLByc3kHXSDzghFKP4Jl5zDOXInC1t5Yhm7nRKQwjo_pC0E/" width="300" /></a></div><br /><br /></div>Hope this helps</div>Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-10906815831145169492020-12-24T14:29:00.002+02:002020-12-24T14:29:45.951+02:00Remove unknown locale qaa-Latn from windows<p>Got a fresh Windows 10 installation where I configured the US keyboard as default and added Greek as well (the language with π and Σ symbols). I noticed that when I was switching languages, a third locale (qaa-Latn) appeared in the list, which I couldn't remove from the Windows language list. In my case, the keyboard was emitting Greek characters, but the Windows spell checker couldn't recognize the words. In order to remove the extra locale, I had to use the following PowerShell.</p>
<script src="https://gist.github.com/andreasbotsikas/bb9ed51d5f798e7a0b4edf6195e07cbd.js"></script>Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-1507449443821911582020-09-20T18:59:00.004+03:002020-09-20T18:59:58.250+03:00Set proxy for command line in windows<p> Ever wanted to force tools like curl and az cli to pass their traffic through a proxy while running them in cmd.exe?</p><p>Set the following environment variables, and these tools will automatically pick them up and use them for all their network connectivity.</p>
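For reference, these are the de-facto standard proxy variables that curl and many other CLIs honor. A bash equivalent looks like this (the proxy address is a placeholder, and NO_PROXY is an extra variable many tools honor for bypass lists):

```shell
# Placeholder proxy address — replace with your own proxy host and port.
export HTTP_PROXY="http://proxy.contoso.com:8080"
export HTTPS_PROXY="http://proxy.contoso.com:8080"
export NO_PROXY="localhost,127.0.0.1"   # comma-separated hosts to reach directly
# Tools like curl read these automatically, e.g.:
#   curl https://example.com   # traffic now goes through the proxy
echo "$HTTPS_PROXY"
```

In cmd.exe the equivalent is `set HTTP_PROXY=http://proxy.contoso.com:8080`, which only applies to the current session.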
<script src="https://gist.github.com/andreasbotsikas/86a92644da1f1ec00e58e2bad112c8f7.js"></script>Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-56715310350972430362020-02-18T23:32:00.003+02:002020-06-05T10:39:50.936+03:00Download new chromium based edge via powershell<div dir="ltr" style="text-align: left;" trbidi="on">
If you ever need to download the stable version of the Chromium-based Edge via PowerShell, you can use the following one-liner:
<br />
<pre># Stable
Invoke-WebRequest -Uri "https://c2rsetup.officeapps.live.com/c2r/downloadEdge.aspx?ProductreleaseID=Edge&platform=Default&version=Edge&source=EdgeStablePage&Channel=Stable&language=en" -OutFile "EdgeSetup.exe"
# Dev
# Invoke-WebRequest -Uri "https://c2rsetup.officeapps.live.com/c2r/downloadEdge.aspx?ProductreleaseID=Edge&platform=Default&version=Edge&source=EdgeStablePage&Channel=Dev&language=en" -OutFile "EdgeSetup.exe"
</pre>
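The only thing that changes between the channels is the Channel query parameter, so the URL can be composed with a small helper. A sketch (only Stable and Dev appear in the snippet above; other Edge channel names like Beta or Canary may work with this endpoint too, but that is an assumption):

```shell
# Build the Edge download URL for a given channel; mirrors the one-liner above.
edge_url() {
  echo "https://c2rsetup.officeapps.live.com/c2r/downloadEdge.aspx?ProductreleaseID=Edge&platform=Default&version=Edge&source=EdgeStablePage&Channel=$1&language=en"
}
edge_url Stable
# To download with curl instead of PowerShell:
#   curl -L -o EdgeSetup.exe "$(edge_url Stable)"
```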
</div>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-11996110288228252182019-11-11T18:53:00.001+02:002019-11-11T18:55:26.674+02:00Fixing an old Wordpress plugin to run on PHP 7<div dir="ltr" style="text-align: left;" trbidi="on">
You should always update your CMS engine and individual plugins. Unfortunately, this is not always possible, especially when you have a website that is not actively maintained and uses a commercial theme or plugins.<br />
<br />
In my case, I had to deal with an old WordPress site which was kept updated but was still running on the old PHP version 5.6.40. After changing the PHP version on the server to 7.3, the site broke with a sneaky <i>Call to undefined function mysql_error()</i> on the frontend and <i>[] operator not supported for strings</i> in the admin area. Both errors were caused by a plugin installed by the theme, which doesn't support auto-updates: more specifically, Revslider 4.6.0, which looks like it was used by various themes in the past.<br />
I had no other option but to try and fix them manually, since I couldn't downgrade to an older PHP version anymore, nor invest more resources to change/update the theme and plugins.<br />
<br />
<a name='more'></a>First I had to locate the files and lines causing the exceptions and then look for suitable solutions. I was lucky: in my case, both errors were easy to fix.<br />
<div>
<br />
The admin error, <i>[] operator not supported for strings in /wp-content/plugins/revslider/inc_php/framework/base_admin.class.php:72,</i> was caused by using the short array-push syntax ($var[] = ...) on a variable that had been initialised as a string, which PHP 7.1+ no longer allows. Initialising the variable as an array instead fixed it.<br />
<br />
For the site error, <i>Call to undefined function mysql_error()</i> in <i>/wp-content/plugins/revslider/inc_php/framework/db.class.php:29</i>, I changed the call to the mysql<b>i</b> alternative of the function, which requires the mysqli link argument. Thankfully, it's available via the global wpdb class of WordPress, which is already utilized in this file.<br />
<br />
You can see my changes on the following two diffs.<br />
<br /></div>
<div>
<script src="https://gist.github.com/cbotsikas/76f7cf26d5a68e5c2d536263b6481baf.js"></script></div>
<div>
<br /></div>
</div>
Christos Botsikashttp://www.blogger.com/profile/11025565251556231416noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-53148075833966504622018-04-05T00:37:00.002+03:002018-04-05T00:37:55.031+03:00Exchange view DSN mail contents from queue<div dir="ltr" style="text-align: left;" trbidi="on">
We had some DSN mails queueing up in our Exchange server's outgoing queue with error 450 4.7.1, and I was trying to figure out what those mails were in the first place. The <i>Subject</i> in all those mails looked like spam, and since the <i>From Address</i> was empty, it got me kind of worried and curious at the same time. Using <i>Queue Viewer</i> wasn't helpful since I couldn't see the origin of the DSN messages. I had to use the <i>Exchange Management Shell</i> to find out more info.<br />
<br />
In order to view the email contents so that you can check the body and find out more information about the cause/source of the DSN, you need to know the message <i>Identity</i>, suspend the message from processing and then export it.<br />
<a name='more'></a><br />
<div style="text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVb11koHfmLXbAPYAl9tU4AmhpNoBEYy-Aj3eELhNMoeasNkF_YBokTbDxylua9mNWT9se1tDAq0qtuDSx8v6Ch2b49bQL8-JvgFwKtby3bvFLC1v0niej1UWu_h9SPwWSL0Lgra6pRIA/s1600/blog01.png"><img border="0" height="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVb11koHfmLXbAPYAl9tU4AmhpNoBEYy-Aj3eELhNMoeasNkF_YBokTbDxylua9mNWT9se1tDAq0qtuDSx8v6Ch2b49bQL8-JvgFwKtby3bvFLC1v0niej1UWu_h9SPwWSL0Lgra6pRIA/s400/blog01.png" width="400" /></a></div>
<br />
You can either find the <i>Identity</i> from the Queue Viewer or, using PowerShell, list all the messages in your queue with one of the following commands:<br />
<blockquote class="tr_bq">
Get-message | format-list</blockquote>
<blockquote class="tr_bq">
Get-message | select Identity, MessageSourceName, Subject, LastError | where {$_.MessageSourceName -eq "DSN"}</blockquote>
Once you have the Identity, you need to suspend the message:<br />
<blockquote class="tr_bq">
Suspend-Message -Identity <i>&lt;YourMessageIdentity&gt;</i></blockquote>
Export the message using the command:<br />
<blockquote class="tr_bq">
Export-Message -Identity <i>&lt;YourMessageIdentity&gt;</i> | AssembleMessage -Path "<i>&lt;YourPath&gt;\&lt;Filename&gt;.eml</i>"</blockquote>
You can resume the message for further processing in your queue:<br />
<blockquote class="tr_bq">
Resume-Message -Identity <i>&lt;YourMessageIdentity&gt;</i></blockquote>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgd5TojwTHj2dtezXNH_P366e_QUE-eC53URx8FCs6V_H0pWCFl0ke-4j-wyT2Qva7e6o0ui32KTqN8DSJUyVUSR5BpLmnjaWHjayMNcQzYu0FmX0cV1LOeIIQ-YMWV5Z7Ojh-GNWaKijg/s1600/blog04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="316" data-original-width="1407" height="88" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgd5TojwTHj2dtezXNH_P366e_QUE-eC53URx8FCs6V_H0pWCFl0ke-4j-wyT2Qva7e6o0ui32KTqN8DSJUyVUSR5BpLmnjaWHjayMNcQzYu0FmX0cV1LOeIIQ-YMWV5Z7Ojh-GNWaKijg/s400/blog04.png" width="400" /></a></div>
<br />
Now I was able to open the mail contents with Notepad and investigate the DSN cause further. In our case, as I suspected, those were NDRs for non-existent mail addresses; nothing to worry about.<br />
<br />
Further reading:<br />
<ul style="text-align: left;">
<li><a href="https://technet.microsoft.com/en-us/library/aa998625(v=exchg.160).aspx" target="_blank">Export messages from queues</a></li>
<li><a href="https://technet.microsoft.com/en-us/library/aa997212%28v=exchg.80%29.aspx" target="_blank">How to View Messages</a></li>
<li><a href="https://docs.microsoft.com/en-us/powershell/module/exchange/mail-flow/get-message?view=exchange-ps" target="_blank">Get-Message</a></li>
<li><a href="https://docs.microsoft.com/en-us/powershell/module/exchange/mail-flow/Suspend-Message?view=exchange-ps" target="_blank">Suspend-Message</a></li>
<li><a href="https://docs.microsoft.com/en-us/powershell/module/exchange/mail-flow/export-message?view=exchange-ps" target="_blank">Export-Message</a></li>
<li><a href="https://docs.microsoft.com/en-us/powershell/module/exchange/mail-flow/Resume-Message?view=exchange-ps" target="_blank">Resume-Message</a></li>
</ul>
<br /></div>
Christos Botsikashttp://www.blogger.com/profile/11025565251556231416noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-51643477575740443452017-07-25T15:39:00.000+03:002017-07-25T15:41:52.557+03:00DNN site's search issue with index outside bounds of array<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="tr_bq">
I had an issue with a DotNetNuke 7.4 site's database size. It had grown way too big, and we only found out about it because of the server's warnings about low disk space.</div>
<div>
Both the Exceptions and EventLog tables were getting three new entries (General Exception, Scheduler Event Failure, Scheduler Exception) every 30 seconds, all saying that the "Index was outside the bounds of the array."</div>
<div>
<a name='more'></a>It was obvious that there was an issue with the site's search indexing. Search functionality wasn't working either. Re-indexing the content or restarting the application pool didn't help.<br />
<br /></div>
<div>
In order to fix this issue I had to do the following:<br />
<ol style="text-align: left;">
<li>Stop site's application pool from IIS.</li>
<li>Delete all contents of the &lt;path_to_your_DNN&gt;\App_Data\Search\ folder.</li>
<li>Start site's application pool.</li>
<li>Re-index content from the site's Admin -> Search Admin page.</li>
</ol>
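Step 2 is the only one that is easy to get wrong: you want to empty the Search folder, not delete the folder itself. A minimal sketch of that step (the installation path is a placeholder; point it at your own site):

```python
# Clear the contents of DNN's Lucene search index folder (step 2 above).
# The path passed in is a placeholder - use your own installation root.
# Stop the application pool first, or the index files will be locked.
import shutil
from pathlib import Path

def clear_search_index(dnn_root):
    search = Path(dnn_root) / "App_Data" / "Search"
    for entry in search.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)   # remove index subfolders
        else:
            entry.unlink()         # remove loose index files
    return search                  # the folder itself is kept, now empty

# Example (hypothetical path):
# clear_search_index(r"C:\inetpub\wwwroot\MyDnnSite")
```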
<div>
Hopefully this will help someone else too.</div>
</div>
<div>
<br /></div>
<div>
For the record, the Stack Trace was the following:<br />
<blockquote class="tr_bq">
at Lucene.Net.Index.SegmentTermDocs.ReadNoTf(Int32[] docs, Int32[] freqs, Int32 length)<br />
at Lucene.Net.Search.TermScorer.NextDoc()<br />
at Lucene.Net.Search.ConjunctionScorer..ctor(Similarity similarity, Scorer[] scorers)<br />
at Lucene.Net.Search.BooleanScorer2.AnonymousClassConjunctionScorer..ctor(Int32 requiredNrMatchers, BooleanScorer2 enclosingInstance, Similarity defaultSimilarity, IList`1 requiredScorers)<br />
at Lucene.Net.Search.BooleanScorer2.MakeCountingSumScorerSomeReq()<br />
at Lucene.Net.Search.BooleanScorer2..ctor(Similarity similarity, Int32 minNrShouldMatch, List`1 required, List`1 prohibited, List`1 optional)<br />
at Lucene.Net.Search.BooleanQuery.BooleanWeight.Scorer(IndexReader reader, Boolean scoreDocsInOrder, Boolean topScorer)<br />
at Lucene.Net.Index.DocumentsWriter.ApplyDeletes(IndexReader reader, Int32 docIDStart)<br />
at Lucene.Net.Index.DocumentsWriter.ApplyDeletes(SegmentInfos infos)<br />
at Lucene.Net.Index.IndexWriter.ApplyDeletes()<br />
at Lucene.Net.Index.IndexWriter.DoFlushInternal(Boolean flushDocStores, Boolean flushDeletes)<br />
at Lucene.Net.Index.IndexWriter.DoFlush(Boolean flushDocStores, Boolean flushDeletes)<br />
at Lucene.Net.Index.IndexWriter.PrepareCommit(IDictionary`2 commitUserData)<br />
at Lucene.Net.Index.IndexWriter.Commit(IDictionary`2 commitUserData)<br />
at DotNetNuke.Services.Search.Internals.LuceneControllerImpl.Commit()<br />
at DotNetNuke.Services.Search.SearchEngineScheduler.DoWork()</blockquote>
</div>
</div>
Christos Botsikashttp://www.blogger.com/profile/11025565251556231416noreply@blogger.com1tag:blogger.com,1999:blog-7761120478392541874.post-12543374032466506612016-02-13T13:33:00.000+02:002016-02-13T13:35:56.063+02:00Recover records after accidental cascade delete<div dir="ltr" style="text-align: left;" trbidi="on">
Cascade delete in an SQL table relation may sometimes lead to serious data loss. Should this occur, you can make use of the log file (LDF) to review the DELETE statements that took place and hopefully manage to recover your lost data.<br />
<a name='more'></a>When you realize that an accidental delete has occurred, stop using the database. If you can, stop the SQL Server instance and copy the MDF and LDF files of the affected database. In my case, I was called a couple of hours later, but no harm was done, as there was no backup process and the LDF file was intact.<br />
You will need to parse the LDF entries to detect the deleted records. I <a href="http://raresql.com/2011/10/22/how-to-recover-deleted-data-from-sql-sever/">found a blog post</a> which offers a stored procedure to do that, but unfortunately it didn't work for me, as I was getting an error on the compatibility level (which was wrong). This is why I ended up using <a href="http://www.systoolsgroup.com/sql-log-analyzer.html">SysTools SQL Log Analyzer</a>, whose demo version can show you the log file entries. If you want to export the entries, you have to buy the product.<br />
This tool works with offline files, which is why I mentioned above to copy the MDF and LDF files. It reads the log and shows the INSERT, UPDATE, and DELETE statements on each and every table you have. The beauty of this tool is that it also shows the deleted rows.<br />
When the analysis of the LDF file finished, I selected all entries in the tree on the left and chose export. Then I used the "Sql server database" option and exported only the deleted rows to a new database created by the tool. Note that I also specified a data filter in order to minimize the amount of data.<br />
After a couple of hours (depending on the size of the lost data) you will end up with a new database that has tables similar to the original ones, each containing the deleted rows.<br />
After that, head to <a href="https://msdn.microsoft.com/en-us/library/ms186335(v=sql.120).aspx">the bulk insert MSDN article</a>, which describes how to export and import the data using the bcp command. The process is fairly simple, as you have to issue the following commands for each recovered table in the newly created RecoveredDatabase:<br />
<br />
<i>bcp RecoveredDatabase.dbo.MyTable out MyTable-n.Dat -n -T</i><br />
<i>bcp RecoveredDatabase.dbo.MyTable format nul -n -x -f MyTable-f-n-x.Xml -T</i><br />
<br />
This can easily be scripted into a bat file using Excel. Just copy the table names from the "Object Explorer Details" window while the Tables folder is selected in the "Object Explorer" of SQL Server Management Studio. Then, with some string manipulation, you can get the complete list of commands in no time.<br />
Having exported your data, you will want to bulk insert it using the identity insert option described in the MSDN article above. To do that, copy the exported files to the C:\RecoveredData\ folder on the machine where the original database is attached (hopefully you have already started the instance if you did shut it down as mentioned above) and then issue the following command for each exported table:<br />
<br />
<i>bcp OriginalDatabase.dbo.MyTable in C:\RecoveredData\MyTable-n.Dat -f C:\RecoveredData\MyTable-f-n-x.Xml -E -T</i><br />
<br />
Using the same Excel file mentioned above, and ordering the tables so that parent tables are inserted before their dependent child tables, you will be able to generate the complete list of bcp insert commands.<br />
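The Excel-based string manipulation described above can just as easily be done with a short script. The sketch below generates both command lists; the table and database names are hypothetical placeholders, and the table list must be ordered parents-first as noted above:

```python
# Generate bcp export/import command lists for a set of recovered tables.
# Database names, table names, and paths are placeholders; adjust to your setup.

def bcp_commands(tables, recovered_db="RecoveredDatabase",
                 original_db="OriginalDatabase", data_dir=r"C:\RecoveredData"):
    exports, imports = [], []
    for t in tables:
        # Export data in native format, plus an XML format file describing it
        exports.append(f"bcp {recovered_db}.dbo.{t} out {t}-n.Dat -n -T")
        exports.append(f"bcp {recovered_db}.dbo.{t} format nul -n -x -f {t}-f-n-x.Xml -T")
        # Import into the original database, keeping identity values (-E)
        imports.append(f"bcp {original_db}.dbo.{t} in {data_dir}\\{t}-n.Dat "
                       f"-f {data_dir}\\{t}-f-n-x.Xml -E -T")
    return exports, imports

# Parent tables before their children, so foreign keys are satisfied on import
exports, imports = bcp_commands(["Customers", "Orders", "OrderLines"])
print("\n".join(exports))
print("\n".join(imports))
```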
Hopefully you are as lucky as my friend was and you will be able to restore all your records and then have some time to reflect on your backup policy and perhaps whether you do need the Cascade delete that caused all this.</div>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-90916334886477854892015-10-17T11:49:00.001+03:002015-10-17T12:15:32.841+03:00Move Azure blobs between containers<div dir="ltr" style="text-align: left;" trbidi="on">
I finally got some time to tidy up my Azure account. The first thing I wanted to do was to bring all my VMs into a single storage account, gathering them from various subscriptions. In order to do that, I had to copy the VHDs (located in the vhds container of each storage account) to their final destination.
</div>
<a name='more'></a>
<div dir="ltr" style="text-align: left;" trbidi="on">
This is how I ended up creating the following simple program. It schedules the transfer of each blob and then monitors the blobs' status to get an indication of when the process is done.<br />
If you are more into PowerShell, you can follow <a href="https://azure.microsoft.com/en-us/blog/migrate-azure-virtual-machines-between-storage-accounts/">this</a> tutorial, which also mentions how to mark the moved VHD as a VM image (Virtual Machines -> Disks -> Create).
<script src="https://gist.github.com/andreasbotsikas/a0500aa976896d070469.js"></script>
</div>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-32182737515620876622015-09-15T05:08:00.000+03:002015-09-15T23:38:10.021+03:00Cancel/Stop DiskPart full disk format<div dir="ltr" style="text-align: left;" trbidi="on">
Using <a href="https://technet.microsoft.com/en-us/library/cc766465(WS.10).aspx" rel="nofollow" target="_blank">DiskPart</a> to clean up hard disk partitions and the MBR is fast and easy, but you can accidentally invoke a full format, which can take a while to finish. In that case, you instinctively press Ctrl+C, only to find that you've just killed DiskPart instead of stopping the format process.<br />
Googling around, people suggest killing the process using Task Manager (which you can't) or rebooting your system. There is yet another option, which worked for me and which no one mentioned (at least on the sites I checked).<br />
<br />
<a name='more'></a>You can take your disk offline! Doing so will stop the format process, and by bringing the disk back online you can continue working with it. Unfortunately, this solution works only with HDDs (external or internal), not with flash drives (unsupported operation). Stopping the format process will leave your partition in RAW state, which means it's inaccessible until you format it. If you want to recover data or partitions, you need to use recovery tools, which I'm not covering in this post.<br />
<div>
</div>
<div>
<br /></div>
<div>
In order to take your disk offline you can use either DiskPart or Disk Management (Start -> Run -> diskmgmt.msc). </div>
<div>
With DiskPart you need to select your disk and then use the "<i>offline disk</i>" command. Here's an example:</div>
<div>
<blockquote class="tr_bq">
C:\Windows\system32><b>diskpart</b><br />
DISKPART> <b>list disk</b><br />
Disk ### Status Size Free Dyn Gpt<br />
-------- ------------- ------- ------- --- ---<br />
Disk 0 Online 238 GB 0 B<br />
Disk 1 Online 186 GB 0 B<br />
DISKPART> <b>select disk 1</b><br />
Disk 1 is now the selected disk.<br />
DISKPART> <b>offline disk</b><br />
DiskPart successfully offlined the selected disk.<br />
DISKPART> <b>online disk</b><br />
DiskPart successfully onlined the selected disk.</blockquote>
</div>
<div>
Using Disk Management, right-click on the disk -> Offline.</div>
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSxrWqQLxh3cyIWxupzXzJ5wbKahzDp82qj4JwB1S4hdXuNcZMIRmVCBpEZkjA8fLYLXBo0m3udOj9hKp0NAbGsFM2SDS-l7Wh8NiOIW9TDoVWBNpaCVd4UlMjexs_LehvtnxEMKXBuQ0/s1600/diskmanageroffline.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSxrWqQLxh3cyIWxupzXzJ5wbKahzDp82qj4JwB1S4hdXuNcZMIRmVCBpEZkjA8fLYLXBo0m3udOj9hKp0NAbGsFM2SDS-l7Wh8NiOIW9TDoVWBNpaCVd4UlMjexs_LehvtnxEMKXBuQ0/s320/diskmanageroffline.png" width="306" /></a></div>
<br />
If you're interested, here's how I was cleaning my old disks from partitions and Active (bootable) status:<br />
<blockquote class="tr_bq">
C:\Windows\system32><b>diskpart</b><br />
DISKPART> <b>list disk</b><br />
Disk ### Status Size Free Dyn Gpt<br />
-------- ------------- ------- ------- --- ---<br />
Disk 0 Online 238 GB 0 B<br />
Disk 1 Online 186 GB 0 B<br />
DISKPART> <b>select disk 1</b><br />
Disk 1 is now the selected disk.<br />
DISKPART> <b>clean</b><br />
DiskPart succeeded in cleaning the disk.<br />
DISKPART> <b>create partition primary</b><br />
DiskPart succeeded in creating the specified partition.<br />
DISKPART> <b>format fs=ntfs <u>quick</u></b><br />
100 percent completed<br />
DiskPart successfully formatted the volume.<br />
DISKPART> <b>assign</b><br />
DiskPart successfully assigned the drive letter or mount point.</blockquote>
Notice the <i>quick</i> parameter at the end of the format command, which I forgot at some point and which inspired this post.<br />
<br />
If you need to do some DiskPart tasks repetitively, you should check <a href="https://technet.microsoft.com/en-us/library/cc766465(WS.10).aspx#sectionSection2" rel="nofollow" target="_blank">DiskPart Scripting</a>, which lets you automate the process by passing a text file containing the commands you want to execute, for example:<br />
<blockquote class="tr_bq">
C:\Users\Christos\Desktop><b>diskpart /s formatDisk1.txt</b></blockquote>
In my case, formatDisk1.txt content is:<br />
<blockquote class="tr_bq">
list disk<br />
select disk 1<br />
clean<br />
create partition primary<br />
format fs=ntfs quick<br />
assign</blockquote>
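When several disks need wiping, a script file like the one above can also be generated programmatically rather than typed by hand. A minimal sketch, where the disk number is a placeholder that you must verify with <i>list disk</i> first, since these commands destroy all data on the selected disk:

```python
# Build a DiskPart script that wipes and quick-formats one disk.
# CAUTION: these commands destroy all data on the selected disk.
# The disk number is a placeholder - verify it with "list disk" first.

def diskpart_script(disk_number, fs="ntfs"):
    return "\n".join([
        f"select disk {disk_number}",
        "clean",
        "create partition primary",
        f"format fs={fs} quick",   # "quick" avoids the slow full format
        "assign",
    ])

script = diskpart_script(1)
with open("formatDisk1.txt", "w") as f:
    f.write(script)
# On Windows you would then run:  diskpart /s formatDisk1.txt
print(script)
```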
</div>
</div>
Christos Botsikashttp://www.blogger.com/profile/11025565251556231416noreply@blogger.com7tag:blogger.com,1999:blog-7761120478392541874.post-80182297446770382152015-09-09T11:05:00.004+03:002015-09-09T11:05:32.282+03:00Converting/Exporting mixed encoding MySQL data to UTF8<div dir="ltr" style="text-align: left;" trbidi="on">
I had to move an old MySQL database storing the data of a Greek website, and guess what: the default schema collation was <i>latin1_swedish_ci</i> and the charset <i>latin1</i>, the defaults of the MySQL instance (which no one changes during installation) :/<br />
The schema contained a mixture of tables, some in <i>latin1_swedish_ci</i> collation and some with the proper UTF8 settings. Trying to export the data from MySQL Workbench, phpMyAdmin, or the host's panel, I was getting an ANSI-encoded SQL file. Normally that's fine, but if your data contains UTF8 characters (i.e. Greek letters), then you've got a problem.<br />
<br />
<a name='more'></a> The Greek word "νέο" from a utf8 table is shown as "Ξ½Ξ-ΞΏ" but the word "Φωτογραφία" from a latin1 table is shown as "Γ¦ÉΓβ€ΓΒΏΓΒ³ΓΒΓΒ±Γβ€ ΓΒ―ΓΒ±". Changing the file encoding to UTF-8 will fix the utf8 strings but the latin1 strings are completely ruined.<br />
In order to fix that, I had to split the export/import task for the utf8 and the latin1 tables.<br />
<br />
For the utf8 tables, I just used one of my dumps changing the file encoding to UTF-8.<br />
The easiest way to do that in Windows is using Notepad++ (Encoding -> Encode in UTF-8, <b>not </b>Convert to UTF-8), which reinterprets the existing bytes instead of transcoding them. If you are on Linux, I guess <i>iconv</i> is the easiest way.<br />
<br />
For the latin1 tables, I had to execute <i>mysqldump </i>specifying the character set:<br />
<blockquote class="tr_bq">
mysqldump -p[PASSWORD] --default-character-set=latin1 --add-drop-table [DB_NAME] > db_latin1.sql</blockquote>
Again, changing the file encoding to UTF-8 will fix the strings from the latin1 tables. This time you can remove all the statements related to the utf8 tables and replace the "latin1" charset statements with "utf8". Now you can either manually merge the two dump files into one, or import both dump files into a new schema.<br />
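The "Encode in UTF-8, not Convert" distinction above matters: you want to reinterpret the raw bytes as UTF-8, not transcode text that was already decoded with the wrong charset. A minimal scripted sketch of the same fix (the file path is a placeholder):

```python
# Reinterpret a dump that was saved as raw "ANSI" bytes as UTF-8.
# This is the scripted equivalent of Notepad++'s "Encode in UTF-8"
# (as opposed to "Convert to UTF-8", which would transcode and ruin the text).

def reinterpret_as_utf8(path):
    with open(path, "rb") as f:
        raw = f.read()             # take the bytes exactly as they are on disk
    return raw.decode("utf-8")     # and decode them as UTF-8

# Demo with the Greek word from the post: its UTF-8 bytes misread with a
# single-byte charset give mojibake, while decoding as UTF-8 recovers it.
raw = "νέο".encode("utf-8")
print(raw.decode("latin-1"))   # mojibake: what the mis-decoded dump looks like
print(raw.decode("utf-8"))     # the original word
```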
<br />
Related posts:<br />
<ul style="text-align: left;">
<li><a href="https://blogs.law.harvard.edu/djcp/2010/01/convert-mysql-database-from-latin1-to-utf8-the-right-way/">https://blogs.law.harvard.edu/djcp/2010/01/convert-mysql-database-from-latin1-to-utf8-the-right-way/</a> </li>
<li><a href="http://bdunagan.com/2011/09/29/converting-mysql-from-latin1-to-utf8/">http://bdunagan.com/2011/09/29/converting-mysql-from-latin1-to-utf8/</a></li>
<li><a href="https://www.void.gr/kargig/blog/2009/03/16/convert-greek-characters-from-latin1-mysql-database-fields-to-pure-utf8/">https://www.void.gr/kargig/blog/2009/03/16/convert-greek-characters-from-latin1-mysql-database-fields-to-pure-utf8/</a></li>
</ul>
</div>
Christos Botsikashttp://www.blogger.com/profile/11025565251556231416noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-67901831922513908712015-07-03T19:16:00.000+03:002015-10-17T11:52:18.446+03:00Homemade certificates for the web developers<div dir="ltr" style="text-align: left;" trbidi="on">
Working with the web, you will definitely end up having to generate a trusted certificate at least for your localhost. In my case, I have been working with certificates a bit more and the need of a personal CA was obviously the best solution. Moreover, I wanted to modify Fiddler's CA name to avoid having the ugly "DO_NOT_TRUST_FiddlerRoot". This post describes how I automated the certificate generation process and also mitigated the <a href="https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/" target="_blank">Firefox's warning about the old SHA1 hashing</a>.<br />
<a name='more'></a><br />
Generating the required certificates with Visual Studio is a three-step process.<br />
<ol style="text-align: left;">
<li>
Load the Visual Studio command-line tools in the command prompt: this is done by calling "%VS120COMNTOOLS%..\..\vc\vcvarsall.bat", where VS120COMNTOOLS is an environment variable pointing to the path of the Visual Studio 2013 (aka vs120) tools.
</li>
<li>Generate a CA, specifying the -cy authority attribute in the makecert tool. Also note that I am using sha256 and a key length of 2048 in order to address the phasing-out warning Firefox floods you with in the debug console.
</li>
<li>Generate the CN=localhost certificate. Note that you could use multiple CNs, making a Subject Alternative Name (SAN) certificate, using the "," separator, like “CN=localhost, CN=ubersite.eu, CN=*.localhost”. </li>
</ol>
Having these two certificates, you can add the public key of the CA to the machine's Trusted Root Certification Authorities store, and both the private and the public key to the machine's My store, in order to allow IIS to use it in its https binding.
These tasks could be done manually (export cer and pfx files from the user's My store and import them in the corresponding locations using the MMC), but PowerShell comes to the rescue when you want to automate them. <br />
<br />
As a bonus, in this script I generate yet another intermediate CA that Fiddler will use to intercept HTTPS web traffic, replacing the scary and ugly “DO_NOT_TRUST_FiddlerRoot”. First you need to generate the certificate. I gave it a friendlier name that reminds me that Fiddler is intercepting the traffic, and then set up the two registry keys required to change Fiddler's default certificate. <br />
These keys are located in HKEY_CURRENT_USER\Software\Microsoft\Fiddler2, and the certificate Fiddler looks for uses the name “CN={MakeCertRootCN}{MakeCertSubjectO}”, which by default (if the keys are not found) has the value “CN=DO_NOT_TRUST_FiddlerRoot, O=DO_NOT_TRUST, OU=Created by <a href="http://www.fiddler2.com/">http://www.fiddler2.com</a>”.
<br />
<br />
Hope you enjoy the following batch file and happy web development :)
<script src="https://gist.github.com/andreasbotsikas/ff23a4f9b01e6bbdf125.js"></script>
</div>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-81599967567019072992015-04-01T23:35:00.003+03:002015-04-01T23:36:06.809+03:00Set / clear windows proxy via command line<div dir="ltr" style="text-align: left;" trbidi="on">
It is a very common task to set and clear the proxy settings depending on the client's network configuration.
I was frustrated by going through Tools -> Internet options -> Connections tab -> LAN settings to configure the proxy settings, so I ended up with a couple of batch scripts that set a client's proxy and one that resets the settings when I am in a more relaxed environment where traffic passes through the normal gateway. Note that I have created shortcuts and set them to "Run as administrator", as the following commands require elevation.
</div>
<script src="https://gist.github.com/andreasbotsikas/37a6dcafe1e84b6c1cf2.js"></script>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-83903863606706520502014-06-26T09:39:00.000+03:002014-06-26T09:39:34.685+03:00Toshiba AuthenTec duplicate fingerprints<div dir="ltr" style="text-align: left;" trbidi="on">
If you have a Toshiba with an AuthenTec biometric device and you forgot to clear the fingerprints before formatting your Windows 8.1 installation, you will face the issue of not being able to register the same fingers with your newly created Microsoft account. The issue stems from the buggy implementation of the Toshiba Fingerprint Utility, which doesn't handle domain accounts that well. Moreover, the fingerprint biometric data is stored inside the hardware, so no matter what file you delete, you won't be able to fix this. Some people on the net report that a simple export and import of the data (using the utility as administrator) worked for them. All the reports mentioned Windows 7, so I guess they were all facing an issue with local accounts rather than domain ones.<br />
<div>
Luckily, Toshiba discovered the issue and offers a command-line tool to delete all fingerprint data from the hardware. The tool is called <a href="http://support.toshiba.com/support/viewContentDetail?contentId=4003856" target="_blank">TFPU WBF Delete Fingerprint Tool</a> (a.k.a. tc30636900b.exe), and it did the trick for me.</div>
</div>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com3tag:blogger.com,1999:blog-7761120478392541874.post-64241952470354801652014-05-28T18:00:00.000+03:002014-05-28T18:00:35.540+03:00Renew expired pfx certificates for click once applications<div dir="ltr" style="text-align: left;" trbidi="on">
If you used the ClickOnce technology back in Visual Studio 2005, you saved yourself a lot of hassle deploying newer versions, especially during the UAT phase. Unfortunately, if you used a dev certificate from Visual Studio and now have to do a minor update, you will discover that the certificate has expired and Visual Studio won't allow you to use it to create a new ClickOnce deployment. Although Microsoft has suggested a <a href="http://support.microsoft.com/kb/925521" target="_blank">couple of ways</a> to tackle the problem, replacing the certificate is not an option in most cases, and the provided code doesn't work that well. <a href="http://may.be/renewcert/" target="_blank">Cliff Stanford</a> has fixed the provided codebase, and I have uploaded a <a href="https://github.com/andreasbotsikas/RenewCert/releases/" target="_blank">slightly modified version to GitHub</a> in order to preserve it.<br />
The main modification is that I add 105 years to the expiration date, in order to avoid redoing the process every 5 years :)<br />
Enjoy</div>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-19254375251708677832014-04-12T13:02:00.002+03:002014-04-12T13:03:51.372+03:00Create hotspot on your windows machine<div dir="ltr" style="text-align: left;" trbidi="on">
Being a developer, you might want to test your locally hosted websites with various mobile devices. The easiest way is to create a virtual hotspot and have the devices connect there. This can easily be done by executing the first two commands of the following gist in an administrative command prompt (run as admin):<br />
<script src="https://gist.github.com/andreasbotsikas/10527805.js"></script>
Once connected, your devices will get an IP in the range of 192.168.x.y (mine was 192.168.173.something) and the PC will be available on the 192.168.x.1 IP.<br />
In order to disable the hotspot, just run the last command on the gist.</div>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0tag:blogger.com,1999:blog-7761120478392541874.post-11055747689486079582014-03-23T21:42:00.000+02:002014-03-23T21:42:00.450+02:00Grant execute on sql user<div dir="ltr" style="text-align: left;" trbidi="on">
Back in the early days of SQL Server, stored procedures were much faster than normal queries. This meant that a typical system would host more than 200 stored procedures (hopefully in the same schema). In order to grant execute access to a role or user, you can use the following snippet:
<script src="https://gist.github.com/andreasbotsikas/9728585.js"></script></div>
Andreas Botsikashttp://www.blogger.com/profile/01752587180565072980noreply@blogger.com0