Azure DevOps Pipelines: Software available on Microsoft hosted agents

Published 2018-11-14, 15:27

https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=vsts&tabs=yaml#software contains links to READMEs on GitHub that list the available software.

Topic(s): Development, Link No comments - :(

Azure DevOps Pipelines: Install and start an Android Emulator

Published 2018-11-14, 15:17

#!/usr/bin/env bash

# Install the system image the emulator will run
echo "y" | $ANDROID_HOME/tools/bin/sdkmanager --install 'system-images;android-27;google_apis;x86'

# Start the ADB server and list devices (none yet at this point)
$ANDROID_HOME/platform-tools/adb devices

# Create the AVD from the system image installed above
echo "no" | $ANDROID_HOME/tools/bin/avdmanager create avd -n test_android_emulator -k 'system-images;android-27;google_apis;x86' --force

# Start the emulator in the background, then poll until boot has completed
nohup $ANDROID_HOME/emulator/emulator -avd test_android_emulator -no-snapshot > /dev/null 2>&1 &
$ANDROID_HOME/platform-tools/adb wait-for-device shell 'while [[ -z $(getprop sys.boot_completed | tr -d '\r') ]]; do sleep 1; done; input keyevent 82'

echo "Emulator started"
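In an Azure Pipelines YAML definition, a script like this can run as its own step. A minimal sketch, assuming the script above is saved as `start_emulator.sh` in the repository root (the file name and display name are my own choices):

```yaml
steps:
- bash: ./start_emulator.sh
  displayName: 'Install and start Android emulator'
```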

https://developercommunity.visualstudio.com/content/problem/340547/azure-devops-unable-to-run-the-android-emulator-on.html

https://github.com/MicrosoftDocs/vsts-docs/issues/1677#issuecomment-433858827

Topic(s): Link No comments - :(

How to comment in cmd.exe / Windows command line

Published 2018-11-14, 15:02

REM this is a comment

or

:: another comment

(The second one is not an official comment syntax – `::` is effectively an invalid label that cmd.exe skips. It only works sometimes and breaks e.g. inside parenthesized blocks, see comment.)

Topic(s): Link, Notiz, Technik 2 comments - :)

Results of an Amazon.de „Datenauskunft“ / „Data Subject Access Request“

Published 2018-09-18, 17:14

On 2018-07-20 I requested a „Datenauskunft“/“Data Subject Access Request“ from Amazon.de, where I have had an account since 2001. I wanted to document both the process and the results, as I couldn’t find anything online back when I requested this myself.


To request a Datenauskunft from Amazon yourself you have to go to amazon.de/kontakt, then select „Digitale Inhalte und Services“, choose „Datenauskunft beantragen“ („Request your data“) in the first dropdown, „Datenauskunft für eine spätere Zusendung beantragen“ („Request data file be sent at a later date“) in the second and „Daten aus allen Kategorien anfordern“ („Request All Your Data“) in the third and last one.

Clicking „E-Mail“ will populate the contact form with some default text that you can just submit after adding your full name to the body text.


Shortly after I submitted the form, I got an email that I had to reply to to confirm that I, the owner of my account email address, had indeed requested this data.

(I had to jump through a few hoops here as I use catch-all email addresses @janpiotrowski.de for login, but replied from my usual email address, which of course their system didn’t like. But as this is for verification of my request, I was happy to comply.)


I got a first response with data ~20 days later on 2018-08-09, which included a download link to a file named `Piotrowski.zip` (138 KB in size).

The email also included the following admission:

Da wir für Ihre Anfrage Daten aus mehreren Bereichen zusammenstellen, gibt es einige zusätzliche Daten, die wir derzeit noch sammeln. Sobald Sie auch diese Daten herunterladen können, werden wir Sie entsprechend informieren.

(“Since we are compiling data from several areas for your request, there is some additional data that we are currently still collecting. As soon as you can download this data as well, we will inform you accordingly.”)

The data file’s content:

1. DSAR_-_Jan_Piotrowski_-_Kontakthistorie.docx
2. DSAR_Anfrage_-_Jan_Piotrowski_-_Kontoinformationen.docx
3. Mein_Profil.xlsx

The `.docx` Word files contained 1) a history of (some recent) written communication with Amazon (including my data request of course), 2) all my current and past shipping addresses, payment methods, failed payments (just one…), watch list(s) content, vouchers used (lots), warranty claims (none), wish list(s) content, subscriptions (prime trial) and some information about Amazon Drive (never used) – really everything I could probably have gotten from the Amazon website myself after logging into my account there.

The `Mein_Profil.xlsx` just contained a link to www.amazon.de/profile.


Roughly one month later on 2018-09-17 I received another email from Amazon, telling me that now all data had been collected and could be provided to me.

Again it contained a download link to a file, this time called Jan Piotrowski.zip, now grown to 1043 KB (10.9 MB unzipped!).

This is its folder and file structure:

|   
+---- Ihr Kundenkonto -
|       DSAR - Piotrowski, Jan - Kontakthistorie.docx
|       DSAR_Anfrage - Piotrowski, Jan  - Kontoinformationen.docx
|       
+---Alexa
|   \---Alexa_0032...ea17a
|       +---Communication - Messages
|       |   \---Conversations
|       |           Conversations.html
|       |           
|       +---Communication - Preferences
|       |       alexa_comms_preferences.html
|       |       
|       +---Lists
|       |       Lists.csv
|       |       
|       +---Preferences
|       |       Preferences.json
|       |       
|       \---Routines
|               Routines.json
|               
+---Appstore
|       PurchaseDownloadInstall_Appstore.csv
|       
+---CloudDrive
|       Piotrowski_Jan_Account_Info.csv
|       Piotrowski_Jan_Client_Events.csv
|       Piotrowski_Jan_Device_Info.csv
|       Piotrowski_Jan_Node_Metadata.csv
|       
+---Community Profile
|       Mein Profil.xlsx
|       
+---Fire TV
|   |   3poptout_000.csv
|   |   appusage_000.csv
|   |   deviceusage_000.csv
|   |   marketingoptout_000.csv
|   |   registration.csv
|   |   
|   \---FireTV_Piotrowski_Jan
|       \---FireTV_Piotrowski_Jan
|               2018-01-23.json
|               2018-01-26.json
|               ...
|               FireTv-Glossary.csv
|               
+---Kindle
|   +---Geräte
|   |       appusage_000.csv
|   |       deviceusage_000.csv
|   |       marketingoptout_000.csv
|   |       registration.csv
|   |       
|   \---Inhalte
|       |   householdsharing.csv
|       |   KindleReadingActions.csv
|       |   ReadingSessions.csv
|       |   whispersync.csv
|       |   
|       \---digitalcontentownership
|               DigitalMusicTrack_B00...AG.json
|               DigitalMusicTrack_B00...DS.json
|               ...
|               KindleEBookSample_B00...W0.json
|               KindleEBookSample_B00...WA.json
|               ...
|               KindleEBook_B00...L0.json
|               KindleEBook_B00...W0.json
|               ...
|               KindlePDoc_2UEXX...TSOH.json
|               KindlePDoc_3753F...0766F1E.json
|               ...
|               MobileApp_B00...BO.json
|               MobileApp_B00...1W.json
|               ...
|               
+---Prime Music
|       DSAR_Jan_Piotrowski_Meine_Musikbibliothek.csv
|       
\---Prime Video
        Genderlanguage.csv
        Locationdata.csv
        Viewcounts.csv
        Viewinghistory.csv

As you can see, this new package includes the same files as the first package in `/- Ihr Kundenkonto -` and `/Community Profile`, but also adds several other folders with my real usage data from Amazon.

The easiest way to look at the data was to just upload the whole folder to Google Drive. The previews of the `.csv` files are decent enough to get a quick overview.

Opening some of the files with Google Spreadsheet then enabled me to better format the available data and add columns to e.g. parse strange date format columns, sort by date columns and so on.


After a first look through the data files, they seem to be a pretty solid representation of what I expected to get from Amazon.

Some „logs“ are a bit short, only showing data from the last few months, which is surprising. After I have looked into those more, I will possibly check with Amazon whether there isn’t more data that they just didn’t include by default.

I’m pretty happy with the time it took Amazon to compile this as well. Instant response and download or even self-serve would have been great, but I understand how much work it might be to compile all this data if there are not automated processes in place (yet). I expect this to get faster and better in the future.

Topic(s): Kram, Link, Notiz No comments - :(

Google Spreadsheet: Extract proper dates from funny date strings

Published 2018-09-18, 13:11

Recently Amazon sent me a „Datenauskunft“ that included a .csv of my Kindle’s Whispersync activity with a column containing dates, but in this funny format:

FriApr1412:21:01UTC2017
WedApr2610:43:54UTC2017
WedApr2611:30:41UTC2017
WedApr2611:30:40UTC2017
FriMay1119:26:44UTC2012
TueNov0819:47:36UTC2016
SunJan1022:12:49UTC2016

To get a proper date out of this, I built this formula to be used in a new column:

=DATE(RIGHT(C2;4);MATCH(MID(LEFT(C2;8);4;3);{"Jan";"Feb";"Mar";"Apr";"May";"Jun";"Jul";"Aug";"Sep";"Oct";"Nov";"Dec"};0);RIGHT(LEFT(C2;8);2))

`C2` being the column with the date here, this first extracts the year (4 characters from the right), then converts a 3-character month name to a number (which it got from extracting chars 4-6 of the first 8 chars of the string), and finally the day, which is the last 2 chars of the first 8-char block. The time is not needed here and discarded.
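Outside of Sheets, the same reshaping can be done on the command line. A minimal sketch using sed and GNU date – the field widths are assumptions taken from the sample strings above, and it requires GNU tools (i.e. Linux):

```shell
# Reshape "FriApr1412:21:01UTC2017" into a form GNU date can parse,
# then print an ISO date.
s="FriApr1412:21:01UTC2017"
# Drop the weekday, insert spaces: "Apr 14 12:21:01 UTC 2017"
d=$(printf '%s' "$s" | sed -E 's/^[A-Za-z]{3}([A-Za-z]{3})([0-9]{2})([0-9:]{8})UTC([0-9]{4})$/\1 \2 \3 UTC \4/')
iso=$(date -u -d "$d" +%F)
echo "$iso"   # 2017-04-14
```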


More variants:

WedApr2611:30:41UTC2017:
=DATE(RIGHT(C2;4);MATCH(MID(LEFT(C2;8);4;3);{"Jan";"Feb";"Mar";"Apr";"May";"Jun";"Jul";"Aug";"Sep";"Oct";"Nov";"Dec"};0);RIGHT(LEFT(C2;8);2))

31-DEC-2015 17:26:15:
=DATE(LEFT(RIGHT(F2;13);4);MATCH(MID(F2;4;3);{"Jan";"Feb";"Mar";"Apr";"May";"Jun";"Jul";"Aug";"Sep";"Oct";"Nov";"Dec"};0);LEFT(F2;2))

Sun May 08 09:47:24 UTC 2011:
=DATE(RIGHT(D2;4);MATCH(MID(LEFT(D2;8);5;3);{"Jan";"Feb";"Mar";"Apr";"May";"Jun";"Jul";"Aug";"Sep";"Oct";"Nov";"Dec"};0);RIGHT(LEFT(D2;10);2))

09/May/2011 11:49:10 UTC:
=DATE(MID(D2;8;4);MATCH(MID(D2;4;3);{"Jan";"Feb";"Mar";"Apr";"May";"Jun";"Jul";"Aug";"Sep";"Oct";"Nov";"Dec"};0);LEFT(D2;2))


Topic(s): Notiz 1 single comment - :/

Berlin-focused Facebook Flohmarkt groups

Published 2018-09-14, 12:40

Any important ones missing?

Topic(s): Link No comments - :(

„Turn off“ output truncation of RSpec output

Published 2018-08-27, 19:16

Use this code (e.g. in your spec_helper.rb) to increase the output length before RSpec truncates output like `expected` and `got` in test failure results:

RSpec::Support::ObjectFormatter.default_instance.max_formatted_output_length = 1024

Topic(s): Technik No comments - :(

Measure command line program execution time on Windows

Published 2018-08-20, 10:05

Linux has the `time` command, which can be used to measure execution time of CLI commands:

$ help time

time: time [-p] PIPELINE
    Execute PIPELINE and print a summary of the real time, user CPU time,
    and system CPU time spent executing PIPELINE when it terminates.
    The return status is the return status of PIPELINE.  The `-p' option
    prints the timing summary in a slightly different format.  This uses
    the value of the TIMEFORMAT variable as the output format.

Example:

$ time sleep 2
real    0m2.009s
user    0m0.000s
sys     0m0.004s

Unfortunately this command doesn’t exist on Windows. But there are alternatives:

ptime.exe

ptime is a simple executable that you use to run your normal command; afterwards it will output the execution time it measured:

Execution time: 7.844 s

Download it, put it in a place with a simple path and just prefix your command with `ptime.exe`.

gnomon

A command line utility to prepend timestamp information to the standard output of another command. Useful for long-running processes where you’d like a historical record of what’s taking so long.

gnomon does a little bit more, as it times each line of the output of your command. But at the end it also outputs a total that can be used to measure the total execution time of your command, with the benefit of also knowing which lines took most of that time:

Total   6.5076s

You install it with npm running `npm install -g gnomon` and then just pipe your command to gnomon:

command | gnomon

Topic(s): Kram No comments - :(

Avoid accidentally closing programs on macOS with Cmd + Q

Published 2018-08-18, 15:24

I am predominantly a Windows user. But as I dabble in mobile apps, I of course also have to use a Mac now and then.

This tended to be a pretty frustrating experience, as my Windows muscle memory always uses Alt Gr + Q to type the @ sign in email addresses – which translates to Cmd + Q on Mac and closes your current program without any additional prompts.

My solution: System Preferences –> Keyboard –> Shortcuts –> Accessibility –> Invert colors: Activate and set to Cmd + Q.

No more accidentally closed programs when typing my email address 🙂

Topic(s): Technik No comments - :(

How to install Laravel (5.6) on Uberspace (7)

Published 2018-07-16, 12:57

Laravel is a great PHP framework, and Uberspace is a great and nice PHP host. Why not mix both?

Installing Laravel on shared hosting – which Uberspace still is, despite the shell access etc. – can be difficult, especially because of the public folder that is used to respond to requests instead of the project root.

Here are the basic instructions for getting a new Laravel project to respond to calls of your <username>.uber.space domain:

  1. Switch folder to /var/www/virtual/<username>.
  2. composer global require "laravel/installer" to install the Laravel Installer.
  3. Create a new Laravel project: laravel new <projectname>.
  4. Delete /var/www/virtual/<username>/html (make sure it is empty, or just rename it maybe) and replace it with a symlink from html to <projectname>/public.
  5. The default Laravel start page should now be available at your <username>.uber.space.

To be able to use the database in the project, you have to make some changes:

  1. Get your generated MySQL password from ~/.my.cnf.
  2. Update the .env file in your Laravel project:
    Username and database should be changed to your username, the password to the one you just retrieved.
  3. Caution: At the time of writing Uberspace uses MariaDB 10.1.34 (find out by running the command mysql --version). Laravel needs some tiny changes if you work with MariaDB <10.2.2:
    • Edit the app/Providers/AppServiceProvider.php file and add the following:
      use Illuminate\Support\Facades\Schema;
      public function boot()
      {
         Schema::defaultStringLength(191);
      }

      (Add both the method call and the import!)

  4. Now you can run php artisan migrate in your Laravel project to create the default tables.
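For step 2, the relevant .env entries then look something like this (all values are placeholders – as described above, both database and username should be your Uberspace username, and the password is the one from ~/.my.cnf):

```ini
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=<username>
DB_USERNAME=<username>
DB_PASSWORD=<password from ~/.my.cnf>
```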

Of course you probably don’t want to host your project at your uber.space domain, will want to check out an already developed project from git instead of creating a new one, and won’t use the default database either – but I am sure you can find your way from here.
