Today I wanted to change the settings of one of my Auth0 accounts. When hitting the save button, I got the error message “Invalid grant types: client_credentials”
The error was pretty confusing since I was not changing anything related to grant types.
After clicking around a bit, I noticed the new “Grant Types” tab in the “Advanced Settings” section. There, “Client Credentials” was checked, but the option was disabled.
Simply de-selecting the option was not possible because it was disabled. Luckily, unchecking any other option also updated “Client Credentials”.
Afterwards, I added the other option back, and as soon as “Client Credentials” was unchecked, everything started working again.
Running unit tests on TeamCity is usually not a big problem, but a couple of days ago I noticed a very strange behaviour.
The number of tests executed on the build server was much smaller than the number of tests running on my local machine, but TeamCity did not report any errors. After digging into the build log, I noticed this exception in the log files:
System.Runtime.Serialization.SerializationException: Unable to find assembly 'FakeItEasy, Version=22.214.171.124, Culture=neutral, PublicKeyToken=eff28e2146d5fd2c'.
The only noticeable difference between the execution on TeamCity and my local machine was that locally I used the ReSharper test runner, while on the build server xunit.console.exe was used to run the tests.
When trying to run the tests with xunit.console.exe on my local machine I could reproduce the error.
For some unclear reason, xUnit was not able to load the FakeItEasy assembly that was located in the bin folder of the test assembly. The only workaround I found was to copy the FakeItEasy assembly to the folder of xunit.console.exe. After doing this, the number of tests on my local machine and on TeamCity matched again.
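In script form, the workaround is just a file copy. The sketch below first fakes a typical folder layout so it can run anywhere; the project and runner paths are invented examples, not the ones from my solution:

```shell
# Demo setup: fake the typical layout of a test project's bin folder and the
# xunit.console runner folder (both paths are invented for this sketch).
mkdir -p MyTests/bin/Release packages/xunit.runners/tools
: > MyTests/bin/Release/FakeItEasy.dll   # stand-in for the real assembly

# The actual workaround: copy FakeItEasy.dll next to xunit.console.exe so the
# runner can resolve the assembly.
cp MyTests/bin/Release/FakeItEasy.dll packages/xunit.runners/tools/
```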
This is not really a satisfying solution, but I couldn’t find a better one. Has anyone found a proper solution to this problem?
Recently our Visual Studio Team Services releases started to fail with the message
File not found: 'C:\Agent\_work\_tasks\AzureRmWebAppDeploy_497d490f-eea7-4f2b-ab94-48d9c1acdcb1\2.1.3\azurermwebappdeployment.js'
We changed nothing, so the behaviour was pretty confusing. When checking the directory mentioned in the error message, the file was indeed not there – but there was a zip archive named “task”. This archive contained the script as well as some other folders.
Unzipping the archive in that folder did the trick and releases started working normally again. The correct folder contains azurermwebappdeployment.js directly, next to the other extracted files.
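The fix itself is a single extraction. The sketch below rebuilds the situation in a local sandbox so it is runnable as-is; on the real agent you would simply extract the “task” archive inside the folder from the error message (for example with Windows Explorer or PowerShell’s Expand-Archive). Python’s zipfile module is used here only to keep the sketch self-contained:

```shell
# Sandbox: recreate a task folder containing only the zipped task, as the
# agent left it (the real folder is the one from the error message).
mkdir -p task-folder
printf 'module.exports = {};' > task-folder/azurermwebappdeployment.js
(cd task-folder && python3 -m zipfile -c task.zip azurermwebappdeployment.js \
  && rm azurermwebappdeployment.js)

# The actual fix: extract the archive in place so the agent finds the script.
python3 -m zipfile -e task-folder/task.zip task-folder
```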
Recently I was opening an Angular2 application that used the webpack Sass loader to bundle the Sass files. Conveniently, Visual Studio performs “npm install” to load all the npm packages.
When running “npm start” I got the following error:
Node Sass could not find a binding for your current environment: Windows 64-bit with Node.js 6.x
Found bindings for the following environments:
- Windows 32-bit with Node.js 5.x
This usually happens because your environment has changed since running `npm install`.
Run `npm rebuild node-sass` to build the binding for your current environment.
Somehow Visual Studio installed the wrong binaries for node-sass. The fix was quite simple – just run “npm rebuild node-sass” as stated in the error message.
Did anyone else experience the same behaviour?
RavenDB is becoming more and more popular among .NET developers. The simple-to-use C# API makes it a good choice for many projects that require a fast persistence layer.
The JSON that is stored can be customized using JSON.NET attributes. But sometimes you annotate your data POCOs and nothing happens, e.g.
- You put [JsonIgnore] on a property but it still appears in the created JSON
The reason is simple: the RavenDB team decided to put a copy of JSON.NET inside the RavenDB DLL, and only the attributes from this “special” namespace are used for the JSON handling. So if you want to customize the JSON in the database, always use the attributes from the Raven.Imports.Newtonsoft.Json namespace.
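To illustrate, a minimal sketch (the class and property names are made up): the attribute below is honored because it comes from RavenDB’s embedded namespace, while the identically named attribute from the standalone Newtonsoft.Json package would be silently ignored.

```csharp
// Note: RavenDB's embedded copy of JSON.NET, not the standalone Newtonsoft.Json.
using Raven.Imports.Newtonsoft.Json;

public class UserProfile
{
    public string Name { get; set; }

    // Honored by RavenDB: the property is left out of the stored JSON.
    // With [Newtonsoft.Json.JsonIgnore] instead, RavenDB would store it anyway.
    [JsonIgnore]
    public string DisplayNameCache { get; set; }
}
```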
Recently, I installed a new laptop with Visual Studio and the “normal” tool chain. While doing this, I encountered a strange error when installing the .NET Core tools.
Setup has detected that Visual Studio 2015 Update 3 may not be completely installed. Please repair Visual Studio 2015 Update 3, then install this product again.
However, repairing Visual Studio did not solve the problem. Luckily, I found this StackOverflow post. The problem could easily be solved by starting the setup on the command line with this command:
Angular2 requires a bunch of files to set up and things to configure. To avoid unnecessary typing and copying of files, the Angular guys created angular-cli, which allows bootstrapping a new Angular2 application with just one line:
ng new my-fancy-app
When installing the angular-cli package using:
npm install -g angular-cli
I encountered a strange error message that stated:
ng: command not found
There is this discussion on GitHub, but none of the comments there solved the issue for me. Finally I noticed that the ng command was not linked in /usr/local/bin. The fix for my problem was to add it via:
ln -s /usr/local/Cellar/node/6.3.1/lib/node_modules/angular-cli/bin/ng /usr/local/bin/ng
Bamboo is a great build server, and the possibility to use EC2 instances as build agents makes it really cost-efficient and flexible. But most of the time, the stock images provided by Atlassian need to be customized to fit your purpose. How do you do this properly?
The easiest way is to create a new AMI based on the stock images and customize it.
- Launch a new instance using one of the existing AMIs (e.g. ami-ed6deb9e for the Ubuntu stock image) or use an instance launched by Bamboo
- Connect to it using SSH and customize it.
- Open the EC2 management console, select the instance and choose Actions -> Image -> Create Image
- Afterwards you need to enter a name and choose Create Image
- Copy the AMI id
- Switch back to Bamboo and navigate to Bamboo administration -> Image configurations
- Create a new configuration using the AMI id you copied before
- You can now launch build agents with your custom setup
I was setting up a static website on Amazon S3. This process is fairly simple. Finally, I wanted to create a user that can only deploy to this one single bucket. As with all other user accounts, I wanted to follow the least-privilege model, so the default S3 full-access policy was not an option for me.
I created a new policy granting full access to this specific bucket. It looked like this:
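It was a standard bucket-scoped policy along these lines (sketch, not the exact original; the bucket name is a placeholder – note that the Resource list needs both the bucket ARN and the objects below it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-website-bucket",
        "arn:aws:s3:::my-website-bucket/*"
      ]
    }
  ]
}
```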
I assigned this policy to the user that uploads my site and started the upload. Bang! Access Denied.
After some investigation I discovered that the ListAllMyBuckets action was causing the problem. I added a second policy:
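The second policy only needs to allow listing the buckets; a minimal sketch (again, not the exact original):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
```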
This solved my issue and the upload worked fine.
Today I had a SharePoint installation that started to throw errors when opening the User Profile Service Application. After examining the ULS log I noticed this log entry:
Requested registry access is not allowed.
So something with the registry permissions seemed to be wrong. I manually browsed through the registry settings and compared them to another, working installation, but I couldn’t find anything. Luckily, I stumbled across the Initialize-SPResourceSecurity cmdlet. I ran it and everything started to work fine again.