Columns: Response (string, 15 to 2k characters), Instruction (string, 37 to 2k characters), Prompt (string, 14 to 160 characters)
Something like this maybe?

RewriteCond %{REQUEST_URI} !^/$
RewriteCond %{REQUEST_URI} !^/index.(php|html?)
RewriteRule ^(.*)$ profile/company-profile.php?cid=$1 [NC,L]
On my page, when I enter the URL domain.com/abc it uses the .htaccess RewriteRule (posted below) and opens the company-profile.php page, showing the ABC profile. (ABC is just an example; it may be anything.) However, even though I have a domain.com/index.php file, when I type just domain.com and hit enter, it takes me to the company-profile.php page, where it is supposed to show the index.php file. My question is: how can I fix this?

RewriteEngine On
RewriteRule ^([a-z0-9]+)?$ /domain.com/company-profile.php?cid=$1 [NC,L]
Rewrite rule doesn't let me access my domain's index.php
According to the description of Amazon S3 Backup extension, it is an upcoming paid feature: Planned features: * Schedule subscription-level backups to be stored in the cloud and configure a rotation policy for old backups. This is a paid feature. Expect it to be released soon. The same goes for Microsoft OneDrive Backup.
I've installed extensions such as Amazon S3 and Microsoft OneDrive and configured them to be "Ready for use", but I cannot find options to configure a scheduled backup using either of them. Screenshots (not reproduced here): remote storage is configured; there are no remote storage options under the scheduled backup; but the remote storage options do appear under "Back up now". Is there an old thread here, or can you guide me on how to solve the issue? Thanks.
Unable to configure cloud storage for scheduled backup in Plesk Onyx [closed]
It is possible for the problem to be related to SELinux (as it was in my case today). Try running:

setsebool httpd_can_network_connect_db on

Or, if that doesn't work:

setenforce 0

From the comments: if you can connect locally it is usually unlikely to be an SELinux problem, but in this case the answerer could connect locally yet not to any remote MySQL server, and the command above resolved it (CentOS 6).
I have a CentOS server running nginx + PHP-FPM that will not connect to an external database for the purpose of a WordPress install. I can SSH into the web server and run mysql to connect to the external MySQL database fine, but when trying to use PHP to connect to the database it fails. Where should I look to resolve this issue?
PHP-FPM on nginx will not connect to an external MySQL
I don't know what Laravel Auditing is, but my best guess is: your first one is a single object, so you can call audits() on it directly; the second one, $scores, is a collection of objects, so you cannot call that method on the collection itself. Try iterating over it and calling audits() on each item; that should be fine.
I'm using Laravel Auditing (link) and I have used it in my controller, where it was working fine. My problem now is that when I apply it in another controller it does not work. Is it only allowed to be used once? My method is the same; I'm just confused about why it doesn't work.

First controller code (working fine):

$leads = Lead::findOrFail($id);
$audit = Lead::findOrFail($id)->audits()->with('user')->get()->last();

Second controller code (not working, error: Method audits does not exist):

$scores = Score::with(['lead','subject'])->where(['subject_id'=>$id])->get();
$audit = $scores->audits()->with('user')->get()->last();
Laravel Auditing BadMethodCallException Method audits does not exist
"I want to remove all files from my git repository and I will update new code at the repository. Is that better, or do I have to create a new fresh repository?"

It is easier to, on the GitHub website:

1. rename the current repo
2. re-create a brand new repo (reusing the same name as the one you just renamed, if you want)
3. clone that new repo
4. add your data to it, commit and push
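A minimal command-line sketch of the same workflow, assuming the new repository has already been created on GitHub and that your-user/your-repo is a placeholder name:

```
# clone the freshly created (empty) repository
git clone https://github.com/your-user/your-repo.git
cd your-repo

# copy the new code into the working tree, then commit and push
git add .
git commit -m "Import new codebase"
git push origin master
```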
The goal is to delete all files from my git repository: I want to remove all files from my git repository, and I will then add new code to the repository. How would I do that, knowing I am using the SourceTree Windows app?
Delete all files from git repository using Sourcetree
Container runtimes are architecture aware & container registries support defining images for multiple architectures. Docker automatically pulls the correct image for the platform it's running on. https://blog.docker.com/2017/09/docker-official-images-now-multi-platform/
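You can see this for yourself by inspecting the manifest list behind a single tag. A sketch using the image from the question; docker manifest may need experimental CLI features enabled on older Docker versions:

```
# list the per-architecture entries behind one tag
docker manifest inspect k8s.gcr.io/kube-proxy:v1.15.1

# the output contains one manifest per platform (e.g. linux/amd64, linux/arm, linux/arm64);
# the daemon picks the one matching its own architecture at pull time
```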
Running

docker run -it -v $PWD:/tmp k8s.gcr.io/kube-proxy:v1.15.1 cp /usr/local/bin/kube-proxy /tmp
file kube-proxy

gives a different result depending on which architecture I am on, e.g. on CoreOS:

Container Linux by CoreOS stable (2135.5.0)
core@node1 ~ $ file kube-proxy
kube-proxy: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped

and on HypriotOS:

HypriotOS/armv7: [email protected] in ~
$ file kube-proxy
kube-proxy: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, stripped

How is this working?
Why does the k8s.gcr.io/kube-proxy Docker image 'work' on multiple architectures?
Once the PVC/PV are created (https://kubernetes.io/docs/concepts/storage/persistent-volumes/), there are a number of possible solutions. For the specific question, options 1 and 2 will suffice. More are listed for reference; this list does not try to be complete.

1. Simplest and native, kubectl cp: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp
2. rsync - still quite simple, but also robust. Recommended for the task (both of the options below were tested). TO a pod: https://serverfault.com/questions/741670/rsync-files-to-a-kubernetes-pod FROM a pod: https://cybercyber.org/using-rsync-to-copy-files-to-and-from-a-kubernetes-pod.html
3. tar, but incremental: https://www.freshleafmedia.co.uk/blog/incrementally-copying-rsyncing-files-from-a-kubernetes-pod
4. Tools for synchronisation, backup, etc. For example, https://github.com/backube/volsync
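As a concrete illustration of option 1, a sketch with placeholder names (the pod name my-pod, container name app and mount path /data are assumptions); note that kubectl cp relies on tar being present inside the container:

```
# copy a local directory into the volume mounted at /data inside the pod
kubectl cp ./local-dir my-pod:/data -c app

# copy a file back out of the pod
kubectl cp my-pod:/data/report.csv ./report.csv -c app
```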
I want to add or copy files into a persistent volume and then use it in a container via a volume mount. Any help?
Copy files in persistent volume kubernetes
By default, the DirectoryIndex is set to:

DirectoryIndex index.html index.htm default.htm index.php index.php3 index.phtml index.php5 index.shtml mwindex.phtml

Apache will look for each of the above files, in order, and serve the first one it finds when a visitor requests just a directory. If the web server finds no files in the current directory that match names in the DirectoryIndex directive, then a directory listing will be displayed to the browser, showing all files in the current directory. The order should be:

DirectoryIndex index.html index.php

(the default is index.html).
I have the following line in my .htaccess file:

DirectoryIndex index.html index.php

Every time I go to index.php it takes me to index.html. Is it possible to allow for both, but leave index.html the default for users visiting www.domain.com?
Make index.html default, but allow index.php to be visited if typed in
Gists can be embedded with JavaScript: You can embed a gist in any text field that supports JavaScript, such as a blog post. To get the embed code, click the clipboard icon next to the Embed URL of a gist. Paste the <script> tag copied from the Gist into a web page hosted on your server, e.g.

<!DOCTYPE html>
<html>
<head>
    <title>To do list</title>
</head>
<body>
    <h1>To do list</h1>
    <script src="https://gist.github.com/user/gist_id.js"></script>
</body>
</html>
I have created a "todo" Gist using GitHub Flavored Markdown on GitHub. Is there any way to host it on my online DigitalOcean server?
How to host my github-flavored-markdown gist as a webpage
The short answer: yes, there is. You can add a block inside a fence marked with "suggestion":

```suggestion
throw new Exception("awesome!");
```

This will create a suggestion with code that can be applied automatically. For additional details, see GitHub's documentation about commenting on a pull request and incorporating feedback.
Is there a commonly used way to add small edits as code suggestions that the author of a PR can either accept or decline? Something similar to the way a Google Doc allows you to "Suggest Edits". I'd like to speed up code reviews, and I think this would be a great teaching tool.
Is there a way to suggest edits to a PR?
"I can not understand how we associate an instance of an object with the pressure it adds."

The instance of the object associates the pressure it adds with a reference to itself by calling AddMemoryPressure. The object already has identity with itself! The code which adds and removes the pressure knows what this is.

"I do not see an object reference being passed to GC.AddMemoryPressure."

Correct. There is not necessarily an association between added pressure and any object, and regardless of whether there is or not, the GC does not need to know that information to act appropriately.

"Do we associate the added memory pressure (amp) with an object at all?"

The GC does not. If your code does, that's the responsibility of your code.

"Also, I do not see any reason to call GC.RemoveMemoryPressure(m_size)"

That's so that the GC knows that the additional pressure has gone away.

"I see no way the amp could affect the GC"

It affects the GC by adding pressure! I think there is a fundamental misunderstanding of what's going on here. Adding memory pressure is just telling the GC that there are facts about memory allocation that you know, and that the GC does not know, but are relevant to the action of the GC. There is no requirement that the added memory pressure be associated with any instance of any object or tied to the lifetime of any object.

The code you've posted is a common pattern: an object has additional memory associated with each instance, and it adds a corresponding amount of pressure upon allocation of the additional memory and removes it upon deallocation of the additional memory. But there is no requirement that additional pressure be associated with a specific object or objects. If you added a bunch of unmanaged memory allocations in your static void Main() method, you might decide to add memory pressure corresponding to it, but there is no object associated with that additional pressure.
What mechanisms of the C# language are used in order to pass an instance of an object to the GC.AddMemoryPressure method? I met the following code sample in the CLR via C# book:

private sealed class BigNativeResource {
    private readonly Int32 m_size;

    public BigNativeResource(Int32 size) {
        m_size = size;
        // Make the GC think the object is physically bigger
        if (m_size > 0) GC.AddMemoryPressure(m_size);
        Console.WriteLine("BigNativeResource create.");
    }

    ~BigNativeResource() {
        // Make the GC think the object released more memory
        if (m_size > 0) GC.RemoveMemoryPressure(m_size);
        Console.WriteLine("BigNativeResource destroy.");
    }
}

I can not understand how we associate an instance of an object with the pressure it adds. I do not see an object reference being passed to GC.AddMemoryPressure. Do we associate the added memory pressure (amp) with an object at all?

Also, I do not see any reason to call GC.RemoveMemoryPressure(m_size); literally, it should be of no use. Let me explain myself. There are two possibilities: either there is an association with the object instance, or there is no such association. In the former case, the GC should know m_size in order to prioritize and decide when to undertake a collection, so it should definitely remove the memory pressure by itself (otherwise, what would it mean for the GC to remove an object while taking the amp into account?). In the latter case it is not clear what the use of adding and removing the amp is at all. The GC can only work with the roots, which are by definition instances of classes, i.e. the GC can only collect objects. So, in case there is no association between objects and the amp, I see no way the amp could affect the GC (so I assume there is an association).
What mechanisms of the C# language are used in order to pass an instance of an object to the `GC.AddMemoryPressure` method?
"I've set up a PTR record for my EC2 instance following this article"

You can't use these instructions for IP addresses owned/controlled by AWS. The only AWS-allocated public IP addresses that are configurable with custom reverse DNS are Elastic IP addresses, and a different process applies (from the same document):

"If you are using an Elastic IP address for your server, you can configure the reverse DNS record of your Elastic IP address by submitting a Request to Remove Email Sending Limitations (root account credentials required), and you don't need to use Amazon Route 53."

The instructions you followed are for IP address space that you control, or that has been delegated to you by your ISP. They are not applicable to Elastic IP addresses. "You don't need to use Route 53" would, in this case, have been more correctly written as you "can't use Route 53".

Allocate an Elastic IP and map it to the server... then you can use the request form and AWS support will configure the reverse records for you.

Public IP addresses that are not EIPs are ephemeral. Once you stop the instance, the address goes back to the pool. Starting the instance again will cause it to be assigned a different public IP address. This isn't the case with EIPs, which are better suited to a permanent fixture like an SMTP server.
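For reference, allocating and attaching an Elastic IP from the AWS CLI might look like the sketch below; the instance ID and allocation ID are placeholders, and the reverse-DNS record itself is still configured via the AWS request form mentioned above:

```
# allocate a new Elastic IP in the VPC
aws ec2 allocate-address --domain vpc

# associate it with the instance (IDs are hypothetical)
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0
```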
I've set up a PTR record for my EC2 instance following this article: https://aws.amazon.com/premiumsupport/knowledge-center/route-53-reverse-dns/. But when I test the rDNS with a tool like dig, it keeps giving me the xxx.compute.amazonaws.com domain as a result. I have waited several times the refresh time and performed the steps in the article multiple times, but the record does not change. I have also set the NS record for the "in-addr.arpa" hosted zone to match the NS record of my domain.

My setup is:

Hosted zone 1: "domain.com."
Hosted zone 1 A record: name "domain.com." value "1.2.3.4"
Hosted zone 2: "3.2.1.in-addr.arpa."
Hosted zone 2 PTR record: name "4.3.2.1.in-addr.arpa." value "domain.com"

Am I missing something here? Are there any other steps I should take, or do you have any tips on how I can further debug this? It seems like outlook.com keeps flagging my messages as spam because the rDNS is incorrect. Any help is very much appreciated.
PTR record for EC2 instance (without elastic ip) not propagating
__init__, unlike __new__, doesn't deal with memory references at all. self is already a valid object, produced by __new__, which __init__ initializes. You can do it the other way round for clarity:

class Person:
    def __init__(self, name: str, age: int):
        # Signal to whoever will read your code
        # that the class is supposed to have these attributes
        self.name = None
        self.age = None
        self.update(name=name, age=age)

    def update(self, name: str, age: int):
        self.name = name
        self.age = age

Now __init__ isn't called by other methods, so no confusion arises.
Reusing the __init__ method to change attributes' values. Python 3.10 on Windows 64-bit.

Let's say I have a class that has an update method which calls the __init__ method. The update method is created to avoid multiple-line assignments of the attributes.

class Person:
    def __init__(self, name: str, age: int):
        self.name = name
        self.age = age

    def update(self, **attributes):
        self.__init__(**attributes)

Object instantiation:

p = Person(name="Alex", age=32)
print(p.name, p.age)
>> Alex 32

Object reference before:

print(p)
>> <__main__.Person object at 0x0000028B4ED949A0>

Calling the update method:

p.update(name="Sam", age=80)
print(p.name, p.age)
>> Sam 80

Object reference after:

print(p)
>> <__main__.Person object at 0x0000028B4ED949A0>

Clearly, the reference hasn't changed! Is this safe? Is there a chance that the object reference in memory gets changed? Obviously, the actual use of this is for large objects that have multiple parameters and get frequently modified at once as internal reactions. Some parameters are optional and don't get modified. If I don't do it like this, I would have to write a cascade of if-else statements, which I don't want to. :)
Updating an instance of an object with the __init__ method
It could be because the newly committed image has lost the CMD directive that was present in rocker-org/rocker/rstudio/Dockerfile#L58:

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

Try creating a new Dockerfile:

FROM michael91/ms:v1
## Add RStudio binaries to PATH
ENV PATH /usr/lib/rstudio-server/bin/:$PATH
ENV LANG en_US.UTF-8
EXPOSE 8787
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

and build it as michael91/ms:v2. Then see whether v2 works better than v1 when it comes to activating RStudio:

docker run -dp 8787:8787 -v /root:/home/rstudio/ -e ROOT=TRUE michael91/ms:v2
I'm trying to use RStudio on a DigitalOcean server using the RStudio Docker image. Since my experience with Linux servers is limited, it's been a bit of a challenge for me. I'm able to get RStudio up and running with:

docker run -dp 8787:8787 -v /root:/home/rstudio/ -e ROOT=TRUE rocker/hadleyverse

However, I'd like to be able to shut down the server and save it to a snapshot when I'm not using it, but not have to re-install packages each time I do so. Using the Docker documentation on updating an image, I am able to create a container, install packages in that container, and then commit the changes:

docker run -t -i rocker/hadleyverse /bin/bash
install.r randomForest
exit
docker commit <CONTAINER_ID> michael91/ms:v1

However, once I make the commit, I am unable to run the updated image properly. I try to run it as follows:

docker run -dp 8787:8787 -v /root:/home/rstudio/ -e ROOT=TRUE michael91/ms:v1

When I do so, RStudio Server is not activated, as it is when I run the original rocker/hadleyverse version. I've tried making commits with and without installing packages; either way it doesn't seem to work. Obviously I'm doing something incorrectly, but I'm not sure what. If anyone could offer me some guidance, I'd really appreciate it.

Edit: Thanks a lot VonC; that did the trick.
Installing packages for Rstudio Docker
Have you tried rdocker? It seems to do exactly what you are looking for. Cheers
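If you prefer not to add another tool, a plain SSH tunnel can achieve a similar effect. This is only a sketch, assuming the remote daemon listens on tcp://127.0.0.1:2376 as in the question; it bypasses docker-machine and points the plain docker client at the tunnel instead:

```
# forward local port 2376 to the daemon on the remote host, in the background
ssh -f -N -L 2376:127.0.0.1:2376 root@remote-host

# point the local client at the tunnel
export DOCKER_HOST=tcp://127.0.0.1:2376
docker ps
```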
I want to use docker machine with a remote server docker daemon through ssh so no need to open 2376 port in the remote server. Local Host: $ docker-machine create --driver generic --generic-ip-address [IP_Address] --generic-engine-port 2376 --generic-ssh-key ~/.ssh/id_rsa --generic-ssh-user root [Host] Remote host: $ docker daemon -H tcp://127.0.0.1:2376 Result of executing the Local Host command: $ docker-machine create --driver generic --generic-ip-address [IP_Address] --generic-engine-port 2376 --generic-ssh-key ~/.ssh/id_rsa --generic-ssh-user root [Host] ... Cannot connect to the Docker daemon. Is the docker daemon running on this host? As per nmap remote port 2376 is closed, so the error makes sense. I have tried tunneling through ssh by executing the following in my local host: $ ssh -L 2376:127.0.0.1:2376 [Remote_Host] ** Note docker machine is trying to reach docker daemon in the remote host, so the tunnel is useful ** I thought maybe using ssh -R or a combination of both would work but I have not been able to make it work yet, do you have any idea or workaround to make this work? Do not hesitate to bring me to a completely different approach to solve this. Thanks in advance.
Docker-machine access to remote docker daemon through ssh tunneling
"... However, the code on the server was not cloned from the repo, does not have a .git file, and is different from the code in the repository. What I would like to do is add the production code to the existing repo as a new branch."

It's pretty simple. On your server, inside your code folder, make it a git project:

# convert the folder to a git repository
git init
# commit your local changes to a new branch
git checkout -b <branch name>
git add .
git commit -m "Initial commit"

Now that it's a git repo, add a remote to the repository (git can have multiple remotes):

# add the repository URL
git remote add origin <github url>
# "download" all changes from the repository
git fetch --all --prune

At this point you have all your changes in a local branch and you have all the original repo code on your file system. Now you have to combine the two:

# choose the desired branch
git branch -a
# merge the desired branch's code into your branch.
# since it has unrelated history you can't simply merge it; you have to use cherry-pick
git rev-list --reverse master | git cherry-pick -n --stdin

In my case I had conflicts, which you will also have since you worked on the original code. Fix those conflicts, commit, and you are ready to go.
I recently took over a project which has a Git repository hosted on GitHub and is running on a production server. However, the code on the server was not cloned from the repo, does not have a .git file, and is different from the code in the repository. What I would like to do is add the production code to the existing repo as a new branch. How can I do that?
Add different copy of codebase to existing git repository
"I must set limits because I must pay for my usage to the cloud provider."

In such a case, I would recommend using the Vertical Pod Autoscaler along with a LimitRange. A LimitRange provides constraints that can:

"Enforce minimum and maximum compute resources usage per Pod or Container in a namespace."

The VPA will try to cap its recommendations between the min and max of the LimitRange, based on current usage.

N.B.: Make sure you have metrics-server installed in your cluster to enable the VPA.
Assume I have a pod with 3 containers: X, Y, and Z. K8s can set a CPU limit for each container in a pod. However, if I set a 1000M CPU limit on each container, then no container can use more than 1000M CPU even if the other two are idle, which is not what I want.

I want to set a CPU quota of 3000M for the pod, rather than for each container. For example, if X and Y are idle, Z can use 3000M CPU; if X is using 1500M CPU and Y is using 1000M CPU, then Z can only use 500M CPU.

So, my question is: how can I share a CPU quota among multiple containers?
How to share a CPU quota among multiple containers? [closed]
In your location block:

location /ds {
    rewrite ^/ds/(.*)$ /$1 break;
    ...
    proxy_pass ...
}

URIs which begin with /ds/ will match the regular expression and be rewritten without the initial /ds. However, the URI /ds does not match the regular expression and will be passed to the upstream application as /ds. There are a number of ways to fix the problem, but the simplest solution is to make the second / in the regular expression optional by adding a ? operator. For example:

rewrite ^/ds/?(.*)$ /$1 break;
I have a strange problem using nginx as a reverse proxy for my Zeppelin instance. I will try to describe the problem below. I am using an EC2 instance as the reverse proxy to access the Zeppelin instance. Just a note: in front of that an AWS ALB sits as a "forward proxy"; this way I can use friendly URLs for exposing the UIs. The path-based routing on the AWS ALB is configured correctly. The request comes to the AWS ALB with the domain subdomain.domain.com/ds, where I use path-based routing to match all requests hitting the /ds path to my target group. The incoming request is then passed to the nginx instance, which is working well. The problem is that if I use a URL without a trailing slash, nginx simply times out. The configuration is below:

# Zeppelin
server {
    listen 541;

    location /ds {
        rewrite ^/ds/(.*)$ /$1 break;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://10.10.10.10:8890/;
        proxy_redirect http://10.10.10.10:8890/ $scheme://$host/ds;
    }

    location /ds/ws {
        proxy_pass http://10.10.10.10:8890/ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection upgrade;
        proxy_read_timeout 86400;
    }
}

Also, below is the simplest example, which I am using for RStudio:

server {
    listen 542;

    location /ds {
        rewrite ^/ds/(.*)$ /$1 break;
        proxy_pass http://10.10.10.10:8787;
        proxy_redirect http://10.10.10.10:8787/ $scheme://$host/ds/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_read_timeout 20d;
    }
}

In case the trailing slash is not provided, I am getting "/ds not found".
Nginx location without trailing slash works only with trailing slash
I solved the problem as follows. For those who have admin rights on the PC, there is a working solution for auto-selecting a certificate.

On macOS, enter in the terminal:

defaults write com.google.Chrome AutoSelectCertificateForUrls -array-add -string '{"pattern":"your_url","filter":{"ISSUER":{"CN":"certificate name"}}}'

On Windows, you need to add the following to the registry:

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\AutoSelectCertificateForUrls\1 = "{"pattern":"your_url","filter":{"ISSUER":{"CN":"certificate name"}}}"
I am writing autotests using Maven + Selenium + JUnit. During UI testing, I can't configure certificate auto-selection so as not to click on the window manually. I have already tried ChromeOptions and ignoring the certificate; it is not working. I found a suggestion to create the file /Library/Preferences/com.google.Chrome.plist, but when running ChromeDriver, this file disappeared. The content of the file is as follows:

<plist version="1.0">
  <dict>
    <key>AutoSelectCertificateForUrls</key>
    <array>
      <string>{"pattern":"your_url","filter":{"ISSUER":{"CN":"certificate name"}}}</string>
    </array>
  </dict>
</plist>
Selenium testing JAVA - I can't configure certificate auto-selection for a specific URL when opening ChromeDriver on MAC
With regular expressions, * means any number of the previous atom, which will match /docs and /docs/. Try this:

RewriteEngine on
RewriteRule ^docs$ http://www.domain.com/ [R=301,L,QSA]
RewriteRule ^docs/(.*) http://www.domain.com/$1 [R=301,L,QSA]

(QSA is "query string append", so /docs/foo?bar=baz won't lose the ?bar=baz.)
I want accesses to e.g. www.thisdomain.com/docs/path1/path2 to redirect to www.thatdomain.com/path1/path2 (note that docs is not part of the new path). I have the following on www.thisdomain.com:

RewriteEngine on
RewriteRule ^docs/* http://www.domain.com/ [R=301,L]

If I access www.thisdomain.com/docs, it redirects to www.thatdomain.com, but if I access a child path like www.thisdomain.com/docs/path1/path2 it fails. Is it possible for the redirect to intercept the child-path access and redirect as I need? If so, any pointers? Thanks.
.htaccess 301 redirect path and all child-paths
You should try with Variables="{KeyName1=string,KeyName2=string}"
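In practice the whole shorthand value usually needs to be quoted so the shell does not split it at the comma. A hedged sketch based on the command in the accompanying question; the account ID and zip path are placeholders:

```
aws lambda create-function \
    --region us-east-1 \
    --function-name MyLambda \
    --zip-file fileb://build/my-lambda.zip \
    --role arn:aws:iam::123456789012:role/my_lambda \
    --handler com.test.MyLambda::handleRequest \
    --runtime java8 \
    --environment "Variables={DEV_URL=dev,PROD_URL=prod}"
```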
I am trying to use environment variables for my Lambda, but when I run the following Lambda AWS CLI create-function command from a Gradle task that runs a shell script,

aws $PROFILESTR lambda create-function \
    --region us-east-1 \
    --function-name MyLambda \
    --zip-file fileb://$ZIP \
    --role arn:aws:iam::$AWS_ACCT_ID:role/my_lambda \
    --handler com.test.MyLambda::handleRequest \
    --runtime java8 \
    --description "Lambda description..." \
    --memory-size 256 \
    --timeout 45 \
    --environment Variables={DEV_URL=dev,PROD_URL=prod}

it gives me this message and doesn't create the Lambda function:

:my-lambda:createLambda
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
  aws help
  aws <command> help
  aws <command> <subcommand> help
Unknown options: Variables=PROD_URL=prod
:my-lambda:createLambda FAILED

If I remove the second variable (declare only one env variable), it creates the function just fine. From the Lambda create-function CLI documentation, the environment option accepts multiple variables:

--environment (structure)
The parent object that contains your environment's configuration settings.
Shorthand Syntax: Variables={KeyName1=string,KeyName2=string}

I should be able to create the Lambda through a script and not through the AWS console (the release should be automated). What am I missing here, or what am I doing wrong?
AWS lambda create-function accepts only one environment variable
Check the following things:

1. Do you really have permission to push (i.e. the repository is not read-only for you)?
2. Did you really copy the right public key to the server, with no unnecessary spaces or newlines?
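Two quick checks from the command line; this is only a sketch and the exact output wording may differ:

```
# confirm the key is accepted by GitHub; a successful run greets you by username
ssh -T git@github.com

# confirm the remote URL actually points at the repository you expect
git remote -v
```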
So I'm trying to push a file to GitHub and I'm getting this error:

sh-3.2# git push -u origin master
SecGenericPasswordCreate failed
Enter passphrase for key '/var/root/.ssh/id_rsa':
Connection closed by 207.97.227.239
fatal: The remote end hung up unexpectedly

I have already set up a public key... but when I try to push, I get this error. What am I doing wrong?
Git error on push
I expect the problem is right here:

mysite.com:tmp/$USER

tmp/ is a relative path, relative to the current working directory. When your code is executed via crond(8), your cwd might be different than when you execute it by hand. As @netcoder points out in his comment, absolute paths are the best way to work with scripts / programs executed out of crontab(5) files.
Edit: Updated to reflect some answers.

I have this script, test.sh, on my home computer (note: $USER = john):

#!/bin/bash
/usr/bin/scp -q [email protected]:/home/$USER/tmp/$USER /home/$USER/tmp/ > /dev/null 2>&1
error_code="$?"
if [ "$error_code" != "0" ]; then
    # if file NOT present on mysite then:
    echo "File does not exist."
    exit
fi
echo "File exists."

Now, let's say I create the file on the server mysite.com like so:

echo > tmp/$USER

Now, when I run the above script on my desktop manually, like so:

./test.sh

I get the result "File exists." But if I run it via crontab, I get the result "File does not exist". My crontab looks like this:

* * * * * /home/user/test.sh >> /home/user/test.log 2>&1

I've spent all day trying to check the logic and everything... I can't seem to figure out why this is so. Thanks for all your help in advance :)

Edit: scp looks in the mysite.com:/home/$USER/tmp/ dir. The $USER on my desktop and the server is the same, so I don't think it's an issue of relativeness. If I were to ssh [email protected] and then do ls tmp/, I'd see the file there. Also, the crontab entry is in my crontab, not another user's or root's crontab.

@Jonathan: I've set up key-based authentication. No password required!
@netcoder: In my log file, I see repeated lines of "File does not exist."
@sarnold: in my scp line, I've put [email protected], just to make sure that cron uses john's account on mysite.com when crond runs the script. Still, same result.
script behaves differently via cron
What version of Az.Accounts is being loaded? If it is 2.0.0-preview, the DevOps task will fail. You can check for it using:

Get-InstalledModule Az.Accounts -AllVersions

If that is the case, use:

Uninstall-Module -Name Az.Accounts -RequiredVersion 2.0.0-preview -AllowPrerelease

to remove the preview, then add the current version:

Install-Module -Name Az.Accounts -RequiredVersion 1.7.0

I have no idea why the preview gets installed, but it has plagued me for a while...
I install PowerShell and the Az module in a container based on ubuntu:16.04:

RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
    wget https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb && \
    dpkg -i packages-microsoft-prod.deb && \
    apt-get update -y && \
    apt-get install powershell -y && \
    pwsh -c "Install-Module -Name Az -Force"

It works fine when I ssh into Docker running on my machine, but fails with the error "Could not find the module Az.Accounts with given version" when executed in an Azure DevOps pipeline. Any ideas how to fix this?
"Could not find the module Az.Accounts with given version" error when running Azure DevOps job in Docker
The events can be listed using the following snippet. You can then process the pod events as needed.

label := ""
for k := range pod.GetLabels() {
    label = k
    break
}

watch, err := clientset.CoreV1().Pods(namespace).Watch(metav1.ListOptions{
    LabelSelector: label,
})
if err != nil {
    log.Fatal(err.Error())
}

go func() {
    for event := range watch.ResultChan() {
        fmt.Printf("Type: %v\n", event.Type)
        p, ok := event.Object.(*v1.Pod)
        if !ok {
            log.Fatal("unexpected type")
        }
        fmt.Println(p.Status.ContainerStatuses)
        fmt.Println(p.Status.Phase)
    }
}()

time.Sleep(5 * time.Second)
I am creating a pod with the Kubernetes client-go library and setting up a watch to get notified when the pod has completed, so that I can read the logs of the pod. The watch interface doesn't seem to provide any events on the channel. Here is the code; how would I get notified that the pod status is now Completed and it is ready to read the logs?

func readLogs(clientset *kubernetes.Clientset) {
    // namespace := "default"
    // label := "cithu"
    var (
        pod *v1.Pod
        // watchface watch.Interface
        err error
    )

    // returns a pod after creation
    pod, err = createPod(clientset)
    fmt.Println(pod.Name, pod.Status, err)

    if watchface, err = clientset.CoreV1().Pods(namespace).Watch(metav1.ListOptions{
        LabelSelector: pod.Name,
    }); err != nil {
        log.Fatalf(err.Error())
    }

    // How do I get notified when pod.Status == completed?
}
Watch kubernetes pod status to be completed in client-go
If you have already created a GitHub repository and installed git on your computer, you have to start the command prompt in your project directory in the Eclipse workspace and type:

$ git init

Then you'll need to get the correct URL for your repository on GitHub and type:

$ git remote add origin https://github.com/[username]/[reponame].git

Now, add all files to your local commit:

$ git add . # this adds all the files

Then make an initial commit:

$ git commit -a -m "Initial commit"

Finally, to put it on the remote, do:

$ git push -u origin --all
I really need to upload a project to GitHub from Eclipse in the next five minutes. I right-click on the project and go to "Team", but none of the options is "Commit". Thanks, Lucas
Push project to Git not working?
I think this does do what you want:

http://codesanity.net/2009/11/conditional-htpasswd-multienvironment-setups/ (original link; this domain is now dead)
http://tomschlick.com/2009/11/08/conditional-htpasswd-multi-environments/
https://tomschlick.com/2009/11/08/conditional-htpasswd-multi-environments
https://tomschlick.com/conditional-htpasswd-multi-environments/ (correct address for the resource as of 2022/01/15)
I'm sure this is possible, but it's beyond my meager abilities with .htaccess files. We have an internal PHP app that we use; we have basic security internally but don't need to worry too much. I would like to make it available online for use when staff are out and about, and I would like additional security based on .htaccess or htpasswd files. Is it possible to write an .htaccess file that does the following?

1. If the user is accessing from office.mydomain.com, it means they are internal (office.mydomain.com resolves to an internal IP like 192.168.22.22), so allow unimpeded access.
2. If the user is accessing from outside, it will be external.myoffice.com; in this case, as an added bit of security, I would like to use .htaccess and a password file to make the user enter an Apache password.

Can anyone tell me how to write this with an .htaccess file?

Update: Thanks for all the answers. I have posted what worked for me as an answer to help others.
Use .htaccess to restrict external access to my Intranet
This project on GitHub provides a template for what I'm trying to do, and it works fine: https://github.com/cagataygurturk/aws-lambda-java-boilerplate
When code is executed in AWS Lambda, my @Autowired Spring dependencies are null. That makes sense if no context is being loaded, but I thought SpringBeanAutowiringSupport would help. How do you inject dependencies correctly in AWS Lambda? This is my code, which has null autowired fields but otherwise works fine (if I replace @Autowired with new):

@Component
public class ApplicationEventHandler {

    @Autowired
    private Foo foo;

    public ApplicationEventHandler() {
        logger.info("I'm sure the constructor is being called");
        SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext(this); // doesn't seem to help
    }

    public void deliveryFailedPermanentlyHandler(SNSEvent event, Context context) throws IOException {
        foo.doStuff(); // causes NPE
    }
}

Thanks in advance!
aws lambda function with spring autowired dependencies
I don't think it is possible to do what you want with the current version (1.0.9.2), but here is what I do to work on two branches.

Clone the two branches from the Git Shell:

git clone https://your-project/master/ master
git clone https://your-project/gh-pages/ gh-pages

In GitHub for Windows, drag & drop the folder you want to work on. To switch branch, drag & drop the folder for the other branch.

From the comments: you don't need to "stop tracking this repo" for this to work, and if you need help cloning multiple branches of a single project, see https://stackoverflow.com/questions/1911109/git-clone-a-specific-branch
I sometimes need to have two different branches of a GitHub repo on my local disk at the same time (especially when dealing with gh-pages). I usually do this by making multiple clones of the repo in different folders, with each clone using a different branch. Is it possible to do this in the GitHub for Windows UI (as opposed to switching a single local clone from one branch to another)? It looks like the only way to switch between local clones is to drag the new clone into the UI every time.
Cloning Multiple Branches in GitHub for Windows
Try going to your SCM site: https://{site name}.scm.azurewebsites.net

Debug Console -> navigate to D:\home\site\deployments

Edit settings.xml and change it to your desired branch:

<?xml version="1.0" encoding="utf-8"?>
<settings>
  <deployment>
    <add key="branch" value="master" />
  </deployment>
</settings>
I have a web app on Azure that is connected to GitHub to do continuous deployment. Is there any quick way to change the branch it is connected to, or do I have to disconnect and then connect again to select the new branch? The problem is that Azure has a bug somewhere: after I disconnect, I can only connect again without an error after 15 or 30 minutes of being disconnected...
Azure: quickly change branch used on continuous deployment (github)
It was a simple solution. I had to update my Ruby to the most current version. I was running Ruby 2.3.3; I updated to Ruby 2.5.3. This resolved the issue for me.
I am using the geocoder gem on my Rails web application to get the latitude and longitude of addresses. Every time I create a new Employee, I want to get the latitude and longitude from the address inserted. Every time I attempt to create or update an employee, I get the following error:

SSL_connect returned=1 errno=0 state=error: certificate verify failed

I have read multiple links and they all say to run the command "gem update --system", but this command has not helped correct the issue. How do I resolve this?
Geocoder gives certificate verify failed error
It was far simpler than I thought. I just added this to the .i file:

%typemap(freearg) uint8_t * {
    //cout << "Freeing uint8_t*!!! " << endl;
    if ($1) delete[]($1);
}

Seems to work.

Edit: switched free for delete[].
I am trying to fix a memory leak in a Python wrapper for a C++ DLL. The problem is when assigning a byte buffer to a helper object that has been created in Python:

struct ByteBuffer
{
    int length;
    uint8_t * dataBuf;
};

I want to supply the dataBuf as a Python array, so the typemap that I came up with (and which works) is this:

%module(directors="1") mymodule

%typemap(in) uint8_t * (uint8_t *temp){
    int length = PySequence_Length($input);
    temp = new uint8_t[length]; // memory allocated here. How to free?
    for(int i=0; i<length; i++) {
        PyObject *o = PySequence_GetItem($input,i);
        if (PyNumber_Check(o)) {
            temp[i] = (uint8_t) PyLong_AsLong(o);
            //cout << (int)temp[i] << endl;
        } else {
            PyErr_SetString(PyExc_ValueError,"Sequence elements must be uint8_t");
            return NULL;
        }
    }
    $1 = temp;
}

The problem is that the typemap allocates memory for a new C array each time, and this memory is not freed within the DLL. In other words, the DLL expects the user to manage the memory of the dataBuf of the ByteBuffer. For example, when creating 10000 such objects sequentially in Python and then deleting them, the memory usage rises steadily (a leak):

for i in range(10000):
    byteBuffer = mymodule.ByteBuffer()
    byteBuffer.length = 10000
    byteBuffer.dataBuf = [0]*10000
    # ... use byteBuffer
    del byteBuffer

Is there a way to delete the allocated dataBuf from Python? Thank you for your patience!

Edit: I don't post the whole working code to keep it short. If required, I'll do it. Additionally, I am using Python 3.5 x64 and SWIG ver 3.0.7.
Memory deallocation from SWIG typemap
Use Docker's multi-stage builds. This mechanism allows you to drop intermediate artifacts and therefore achieve a lightweight image. Example:

FROM alpine:latest as build
# copy large file
# build

FROM alpine:latest as output
# copy necessary files built in the previous stage
COPY --from=build app /app

Anything built in the build stage will not be included in the final image, unless you explicitly COPY them. Docs: https://docs.docker.com/develop/develop-images/multistage-build/
I have a big tar/executable (over 30 GB) that I COPY/ADD, but it is used only for the installation. Once the application is installed I don't need it anymore. How can I handle this? I am trying to use it, but:

1. Every time I run a build, it takes minutes to define the build context.
2. I'd like to share this image; if I create a tar with docker save, is only the final version included in it, or every layer?

I found some solutions that said I can use RUN wget tar ... && rm tar, but I don't want to create a web server for that. Why isn't it possible to mount a volume during the build process? It would be very useful.
How to use big file only to build the container without adding it?
It depends on what you want - there's no simple answer for what you're looking for. The options I see are: Everything in one container: You copy all of the WAR files/directories into the container when you build your image and then start that. That means that you only have to worry about a single command to start/stop/restart things, but it also means that all applications share the same lifecycle and the same resources (CPU, JVM heap, thread pools, etc.). The alternative is to run something like Docker Compose, where you have a separate image per application, and start a separate container. Using Docker Compose, you can link them together, so they can see each other, and you can start/stop/restart/delete/recreate them as needed. You could even go further and start multiple instances of the same app if required - this will not work with the other approach. As you can see, each approach has advantages and disadvantages, it really depends on what you want. The second approach with a single image/container per app is more flexible, but requires more configuration, and has a bit more overhead in that you would be running 6 Tomcat instances instead of just one. For what it's worth, I use the second approach for my use case. I have a base Tomcat image that is shared across all apps, it's based on the official Docker Tomcat image, and adds a couple of common things. Then each app has its own image with additional configuration, and then I use Docker Compose to pull it all together. If you don't care about the added flexibility and all of your apps will always have the same lifecycle, then the first approach might also work - it all depends and what you're trying to do.
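A rough command-line sketch of the second approach, with hypothetical image and directory names, assuming a docker-compose.yml already describes one service per application:

```
# build a shared Tomcat base image (its Dockerfile is assumed to extend the official tomcat image)
docker build -t mycompany/tomcat-base ./base

# build one image per application on top of that base
docker build -t mycompany/app1 ./app1
docker build -t mycompany/app2 ./app2

# start all services defined in docker-compose.yml
docker-compose up -d

# restart a single application without touching the others
docker-compose restart app1
```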
I have a java application which currently runs inside of a tomcat instance. This application has like 3-6 (depending on the use) webapps. I would like to pack this application into a docker container to setup a new test system infrastructure. What would be the best strategy to do that? Pack every webapp into a tomcat based container and bind it with docker-compose (I would prefer that but I'm not sure if that is even possible?) or have one tomcat based container with all webapps in it. Does anyone have experience with it?
Docker: Dockerize Tomcat application - Best practise
In order to view your project files as a static web page, you should store your files not in the default master branch but in a gh-pages branch. You can create this branch using multiple methods, but to find the most convenient one, you can use this GitHub Pages link.

Basically, let's assume that you already have a master branch. If you are using the git command-line tool, you can do it with these steps:

1. cd your-project-folder
2. git checkout -b gh-pages (it will create the new branch and switch to it)
3. git push origin gh-pages (it will create the new branch on the GitHub repo and push the existing files to it)
I am trying to view a static HTML file. My GitHub URL is anuragasaurus.github.io and my repo name is js-playground; it contains an index.html file. I am trying to open anuragasaurus.github.io/js-playground/index.html but it's showing 404. Can anybody tell me how I can access the index.html file in my js-playground repo?
index.html in github repo not opening
"I can't use -v because this container will be used as a build step in a CI/CD pipeline and as I understand it, it is just run as is."

Using -v to expose your current directory is the only way to make that .deploy/mup.js file available inside your container, unless you are baking it into the image itself using a COPY directive in your Dockerfile. Using the -v option to map a host directory might look something like this:

docker run \
  -v $PWD/.deploy:/data/.deploy \
  -w /data \
  docker-mup deploy --config .deploy/mup.js

This would map (using -v ...) the $PWD/.deploy directory onto /data/.deploy in your container, set the current working directory to /data (using -w ...), and then run deploy --config .deploy/mup.js.
I would like to run this command:

docker run docker-mup deploy --config .deploy/mup.js

where docker-mup is the name of the image, and deploy, --config, .deploy/mup.js are arguments. My question: how do I mount a volume such that .deploy/mup.js is understood as the relative path on the host from where the docker run command is run? I tried different things with VOLUME, but it seems that VOLUME does the contrary: it exposes a container directory to the host. I can't use -v because this container will be used as a build step in a CI/CD pipeline, and as I understand it, it is just run as is.
Docker how to pass a relative path as an argument
The important thing to realize about git diff A B is that it only ever shows you the difference between the states of the tree at exactly two points in the commit graph - it doesn't care about the history. The .. and ... notations used for git diff have the following meanings: git diff A..B is exactly the same as git diff A B (it compares the two endpoints), while git diff A...B compares the merge base of A and B with B (i.e. only the changes made on B since the branches diverged).

So when you run git diff master feature, that's not just showing you the change introduced by the commit you've marked as 2 - the output should show the exact differences between the state of the tree committed in master and the state of the tree committed in feature. If it's not showing you the earlier changes on your feature branch, perhaps you resolved conflicts from the earlier merges from master in favour of the version in master?

As cebewee says, it may be that what you want is git log -p master..feature, since git log does care about history. The meanings of .. and ... for git log are different from their meanings for git diff, since they select a range of commits.

Incidentally, it's often said that merging from master into a topic branch is the wrong thing to do - instead you should be rebasing, or merging your topic branch into master after it is complete. This keeps the meaning of the topic branch easily understood. The Git maintainer did a (somewhat difficult to understand) blog post about the philosophy of merging which discusses that.
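A short illustration of the difference, assuming the history sketched in the accompanying question:

```
# exact difference between the two branch tips (history is ignored)
git diff master feature        # same as: git diff master..feature

# difference since the branches diverged, i.e. what a GitHub pull request shows
git diff master...feature

# the commits unique to feature, each with its patch
git log -p master..feature
```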
I want to see the difference between the master branch and my feature branch. I have many pulls from master into my feature branch and want to see the changes that would be added if I merged my feature into master. This is my situation:

-*--*--*-----*<master>
  \  \        \
   1--*--*--*--2--*<feature>

My problem is that git diff master feature seems to only display commit number 2. How can I see the diff that a GitHub pull request would show, which I believe is all the way to commit 1? I noticed git cherry shows me the commits I want to see the difference for.
'git diff' doesn't show enough
Update: this should no longer be necessary because Authorize.net has updated its production servers' certificates.

You may have found this to stop working all of a sudden because the Ubuntu ca-certificates package just dropped support for them in the most recent update:

http://changelogs.ubuntu.com/changelogs/pool/main/c/ca-certificates/ca-certificates_20141019ubuntu0.12.04.1/changelog
http://changelogs.ubuntu.com/changelogs/pool/main/c/ca-certificates/ca-certificates_20141019ubuntu0.14.04.1/changelog

My coworkers and I encountered this with a client just the other day - their donations suddenly stopped working. The real solution is that Authorize.net needs to update their certificate. However, in the meantime, you can just add the one missing certificate. I put together notes on how to do this in Ubuntu here: https://aghstrategies.com/content/SSL3_GET_SERVER_CERTIFICATE

I also stashed the one root certificate (insecure though it may be) at https://github.com/agh1/ca-certificate-for-authorize.net

Again, my hope is that this only needs to be a short-term solution until they get a new certificate, but this will be a good stop-gap.
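To check which certificate chain the server is actually presenting, and whether a given CA bundle accepts it, something like the following can be used (a sketch; the bundle path is an assumption):

```
# show the certificate chain served by the gateway
openssl s_client -connect secure.authorize.net:443 -showcerts </dev/null

# test a request against a specific CA bundle
curl --cacert /path/to/cacert.pem -v -o /dev/null \
    https://secure.authorize.net/gateway/transact.dll
```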
The URL for transactions with Authorize.net is https://secure.authorize.net/gateway/transact.dll. If we visit this URL and inspect the certificate, we can see that it is signed by the intermediate certificate with CN = Entrust Certification Authority - L1E, valid until 10 December 2019 17:25:43. However, if you visit the Entrust site https://validev.entrust.net/, you see that their intermediate cert with the same CN is valid until 11 November 2021 23:00:59 - so it is a more recent version. These two intermediate certificates do not share the same root certificate.

In my case, a problem occurred because the well-known list http://curl.haxx.se/ca/cacert.pem used by cURL in my configuration did not contain the root certificate for the previous version of the certificate; it contained only the root certificate for the new version. When I added the root certificate for the old version to the file manually, the problem was solved. However, I want to understand what exactly went wrong. Should the list have contained the root certificates for both versions? Should Authorize.net have updated its certificate so that it matches the more up-to-date CA bundle?
How comes authorize.net uses a certificate that is signed with a CA that is not in the well known curl.haxx.se/ca/cacert.pem list?
A little late with this answer, but I encountered this issue myself and finally tracked down that the revert behaves in a surprising way: the original commit being reverted is still in the history, so when you go to create a new PR, GitHub still thinks the changes are already there and you see no difference when doing the diff. This StackOverflow answer gives some more details about it.
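One commonly used way out is to revert the revert commit itself, which re-applies the original changes and gives you something to open a new pull request from. A sketch only, with a placeholder commit hash and branch name:

```
# find the hash of the revert commit on the target branch
git log --oneline

# revert the revert (hash and branch are hypothetical), then push and open a new PR
git revert abc1234
git push origin my-branch
```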
I have reverted a pull request on GitHub by following this article: https://help.github.com/articles/reverting-a-pull-request/. Now, even after reverting, when I compare the two branches they show as the same. How can I raise a pull request again? Here is what I did:

1. I raised a pull request from the prod_bug_fix branch to release/13.0.0, went to GitHub and merged it.
2. Then I followed the above article and unmerged the pull request. I thought the release/13.0.0 code would now be back to how it was before I raised the pull request.
3. I tried raising a pull request again from prod_bug_fix to release/13.0.0, but it says "There isn't anything to compare", even though I can see there are code differences between the two branches.

What did I do wrong, and how can I put release/13.0.0 back in the same state as before?
Reverted a pull request from github but both branch shows no difference
"Can anyone please help me how to deploy this webapp to Azure with these certificates?"

If you want to install these certificates under CurrentUser, you could upload the .pfx to the Azure Web App from the Azure portal and add an app setting called WEBSITE_LOAD_CERTIFICATES, with its value set to the thumbprint of the certificate. To make multiple certificates accessible, use comma-separated thumbprint values. To make all certificates accessible, set the value to *.
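If you prefer the command line over the portal, roughly the same thing can be done with the Azure CLI. This is only a sketch with placeholder resource names and paths, not a verified deployment script:

```
# upload the .pfx to the web app (names and path are hypothetical)
az webapp config ssl upload --resource-group my-rg --name my-app \
    --certificate-file ./signing-cert.pfx --certificate-password "$PFX_PASSWORD"

# make the certificate(s) available to the app code;
# use the certificate thumbprint instead of '*' to expose only one certificate
az webapp config appsettings set --resource-group my-rg --name my-app \
    --settings WEBSITE_LOAD_CERTIFICATES='*'
```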
I have a C# ASP.NET 4.5 IdentityServer v3 web application. I am using this as my authorization server, with the default sample signing certificates as mentioned in "Default Signing Certificates". Now I want to deploy this to Azure. I am a newbie to Azure hosting. Can anyone please help me deploy this web app to Azure with these certificates? I tried the following code; the thumbprint is of the certificate I uploaded to the Azure website:

public X509Certificate2 LoadCertificate(string filename, string password)
{
    X509Certificate2 cert = null;
    X509Store certStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    certStore.Open(OpenFlags.ReadOnly);
    X509Certificate2Collection certCollection = certStore.Certificates.Find(
        X509FindType.FindByThumbprint,
        "6B7ACC520305BFDB4F7252DAEB2177CCd091FAAE1",
        false);
    if (certCollection.Count > 0)
    {
        cert = certCollection[0];
    }
    if (cert == null)
    {
        var path = $@"{AppDomain.CurrentDomain.BaseDirectory}{filename}";
        cert = new X509Certificate2(path, password);
    }
    return cert;
}

Thanks
How to deploy asp.net 4.5 identity server application to azure with signing certificates
Edit the file /usr/local/lib/python2.7/dist-packages/oslo_vmware/service.py at line 141. Comment out the line saying

self.verify = cacert if cacert else not insecure

and add one extra line

self.verify = False

i.e.

#self.verify = cacert if cacert else not insecure
self.verify = False

and re-run n-cpu again. Or execute ./unstack.sh and ./stack.sh for a fresh setup.
I am trying to install OpenStack and ovsVapp on my server. Everything goes well during the initial stage. Later I got an error in n-cpu saying:

SSLError: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

And one more error in n-cpu:

/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
No handlers could be found for logger "oslo_config.cfg"
SSLError: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
One of the answers to this question solved my problem. Here it is with a few edits. I found a solution:

1. Open Windows Network Connections
2. Right click on the VirtualBox Host-Only Adapter that was created
3. Choose Properties
4. Check "VirtualBox NDIS6 Bridged Networking Driver"
5. Disable and enable the highlighted item

For me, "VirtualBox NDIS6 Bridged Networking Driver" was not checked. I checked it and clicked OK to close the Properties window. After that, the Docker Quickstart Terminal was able to start the VM successfully.
I've tried several times to start the Docker VM via the Docker Quickstart Terminal. After deleting the default virtual machine in VirtualBox I receive the following output Creating Machine default... Running pre-create checks... Creating machine... (default) OUT | Creating VirtualBox VM... (default) OUT | Creating SSH key... (default) OUT | Starting VirtualBox VM... Error creating machine: Error in driver during machine creation: exit status 1 Looks like something went wrong... Press any key to continue... To troubleshoot further, I attempted to start the default machine in the VirtualBox GUI directly using Start > Headless Start, as suggested in other Docker issues. The startup failed and I received an error dialog box with the content: Failed to open/create the internal network 'HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter' (VERR_INTNET_FLT_IF_NOT_FOUND). Failed to attach the network LUN (VERR_INTNET_FLT_IF_NOT_FOUND). Result Code: E_FAIL (0x80004005) Component: ConsoleWrap Interface: IConsole {872da645-4a9b-1727-bee2-5585105b9eed} Versions of related components: VirtualBox Version 5.0.11 r104393 Docker Toolbox 1.9.1a Windows 10 Version 1511 (OS Build 10586.14)
Docker Quickstart Terminal fails to start VirtualBox VM in Windows 10
This regex may help:
preg_replace("/testblog1\.php\?id=\d*&title=/", "", $input_lines);
So http://www.eyecatchers.co/testblog1.php?id=123&title=your-title becomes http://www.eyecatchers.co/your-title. (The pattern has no capture group, so the replacement is simply an empty string that strips the matched "testblog1.php?id=...&title=" part.)
I want to remove the id from the URL and rewrite it as the domain followed by the title only, for an SEO-friendly URL. My current URL is http://www.eyecatchers.co/testblog1.php?id=110&title=6-Benefits-of-Hiring-a-Digital-Marketing-Agency and I want to rewrite it through .htaccess to http://www.eyecatchers.co/6-Benefits-of-Hiring-a-Digital-Marketing-Agency. I have tried:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule id/(.*)/(.*)/ testblog1.php?id=$1&title=$2
RewriteRule id/(.*)/(.*) testblog1.php?id=$1&title=$2
But it gives me http://www.eyecatchers.co//id/110/6-Benefits-of-Hiring-a-Digital-Marketing-Agency. How can I get the domain followed by the title only?
How to remove id and title from url and rewrite it with domain followed by title name only?
I was in deep conversations with AWS SES support regarding this issue. This is the outcome:I also would like to update you that SES internal team were able to confirm a deliverability issue with the recipient ISP and are actively working towards a resolution but we do not have an exact ETA at this time. Due to the nature of the shared IP pool, these types of blocks can happen periodically and we make every effort to resolve these issues as fast as possible. To prevent impact from these types of issues, it is always recommended to use dedicated ips for higher volume sending.It means that the shared IP addresses used by AWS SES are blacklisted with GMX and WEB.de AWS SES wants to resolve this.In the meantime, they recommend to usededicated IP addressesto solve this issue. Please note that these IP addresses have to be"warmed up"in order to not cause trouble on the recipient end (e.g. spam folder issues). Unfortunately, my sending volume is not that high (yet) so I have my fingers crossed I can get those emails send out easily. Otherwise I have to find another solution or need to wait for AWS so solve the blacklist issue. I hope this helps anyone else.Edit January 2021I was able to send to GMX/WEB.de although my IP was only starting to warm up. Now after one month I am nearly at 100% with not many emails per day sendout volume.
I am using AWS SES to send out emails automatically through my application. I have configured identity management and DKIM is set up correctly. I have no issues sending emails from my domain except to GMX and WEB.de addresses, where I receive the following error:
Action: failed
Final-Recipient: rfc822;[email protected](mxweb111) Nemesis ESMTP Service not available
554-No SMTP service
554-Reject due to policy restrictions
Looking at further documentation, it seems that emails coming from my domain are classified as spam by their servers. I have done research and found that I might need to configure reverse DNS, but it looks like AWS SES does not support this. What else can I do to make my emails get through the WEB.de and GMX servers? Thank you.
AWS SES 554-No SMTP Service for web.de and GMX email addresses [closed]
Okay! So the problem was, I think, related to this bug. It seems that even though AppArmor wasn't configured to prevent access to sockets inside the containers, it was actually doing something to prevent reading from them (though not creation...), so turning off AppArmor for the container (following these instructions) fixed it. The two relevant steps were:
sudo apparmor_parser -R /etc/apparmor.d/usr.bin.lxc-start
sudo ln -s /etc/apparmor.d/usr.bin.lxc-start /etc/apparmor.d/disabled/
and adding
lxc.aa_profile = unconfined
to the container's config file. NB: these errors were not recorded in any AppArmor logs.
I have a flask app running under uWSGI behind nginx.*1 readv() failed (13: Permission denied) while reading upstream, client: 10.0.3.1, server: , request: "GET /some/path/constants.js HTTP/1.1", upstream: "uwsgi://unix:/var/uwsgi.sock:", host: "dev.myhost.com"The permissions on the socket are okay (666, and set to the same user as nginx), in fact, even when I run nginx as root I still get this error.The flask app/uwsgi is sending the request properly. But it's just not being read by Nginx. This is on Ubuntu Utopic Unicorn.Any ideawherethe permission might be getting denied if the nginx process has full access to the socket?As a complicating factor this server is running in a container that has Ubuntu 14.04 installed in it. And this setup used to work... but I recently upgraded the host to 14.10... I can fully understand that this could be the cause of the problem. But before I downgrade the host or upgrade the container I want to understand why.When I run strace on a worker that's generating this error I see the call it's making is something like this:readv(14, 0x7fffb3d16a80, 1) = -1 EACCES (Permission denied)14seems to be the file descriptor created by this system callsocket(PF_LOCAL, SOCK_STREAM, 0) = 14So it can't read from a local socket that it has just created?
nginx permission denied while reading upstream - even when run as root
I don't know how to share the UTS namespace between containers. But if your final aim is just to share the hostname between containers, the following works if you don't mind sharing the network as well:
shubuntu1@shubuntu1:~$ docker run -idt --name container1 ubuntu:18.04 /bin/bash
3b024de861049c63852e5b196b8730c23ccba8454eb894aa1159c046dd35043e
shubuntu1@shubuntu1:~$ docker run -idt --name container2 --net container:container1 ubuntu:18.04 /bin/bash
ee3d696a589bb7aa3000cec7587f4b920088edf829f8cda029b019316451e92f
shubuntu1@shubuntu1:~$ docker exec -it container1 hostname
3b024de86104
shubuntu1@shubuntu1:~$ docker exec -it container2 hostname
3b024de86104
This lets container2 use the same network (and hostname) as container1; refer to the docs on container network mode. Another way may be to expose /etc/hostname via bind mounts or volumes and let all the containers that need it use the same one; then you don't need to share the network.
Docker run command by default uses a dedicated UTS namespace for the container and because of it the container gets its own/unique hostname. I am trying to share the UTS namespace between two containers but it seems that it is not possible with docker run command.Following are the commands that I ran -docker run -d --name container1 alpine sleep infinity docker run -it --name container2 --uts container:container1 alpine /bin/shError -docker: --uts: invalid UTS mode.Based on thedocumentation, it looks like it is not possible to reference a container with the --uts flag. This is not the case with other namespace related flags like --pid, --network, etc. They support referencing other containers. Why "container:<name|id>” mode is not supported by the --utc flag? How to share UTS namespace between containers so that they share hostname?
Docker run - how to share UTS namespace between containers?
Your code is just fine. RealityKit has a huge memory footprint that will be left there. There is a small room for improvement, though. First, if your code is in UIKit, you should add the following:
func removingView() {
    self.arView?.session.pause()        // there's no session on macOS
    self.arView?.session.delegate = nil // there's no session on macOS
    self.arView?.scene.anchors.removeAll()
    self.arView?.removeFromSuperview()
    self.arView?.window?.resignKey()
    self.arView = nil
}
deinit {
    removingView()
}
Second, to reduce the memory footprint you can set up the ARView with the following render options:
arView.renderOptions = [
    .disableHDR,
    .disableDepthOfField,
    .disableMotionBlur,
    .disableFaceMesh,
    .disablePersonOcclusion,
    .disableCameraGrain,
    .disableAREnvironmentLighting
]
And that's about it. There's an ongoing thread on the Apple forum; you can go there and give it a vote to push Apple engineers to fix this bug.
I am using RealityKit to generate mesh after a particular time interval and adding it as child to the root node. Before creating another mesh I am removing the previously created Entity from parent node also assigning plainCard = nil to it. Below is the sample code: import RealityKit import Combine import SceneKit private var sceneEventsUpdateSubscription: Cancellable! class ViewController: UIViewController { @IBOutlet var arView: ARView! let anchor = AnchorEntity(world: [0,0,0]) let rootEntity:Entity = Entity() var plainCard:ModelEntity? = nil var pivot = SCNMatrix4MakeTranslation(0.069, 0.155, 0) var timer: Timer? = nil override func viewDidLoad() { super.viewDidLoad() timer = Timer.scheduledTimer(timeInterval: 0.02, target: self, selector: #selector(addARElement), userInfo: nil, repeats: true) arView.scene.addAnchor(anchor) anchor.addChild(rootEntity) self.rootEntity.position = [0,0,-0.5] } @objc func addARElement() { plainCard?.removeFromParent() plainCard = nil plainCard = ModelEntity(mesh: MeshResource.generateBox(width: 0.2, height: 0.11,depth: 0), materials: [UnlitMaterial(color: .red)]) plainCard?.transform = Transform(matrix: simd_float4x4(pivot)) rootEntity.addChild(plainCard!) } } Here is my question: With this creation of box at particular interval, memory continues to increase, there is significant increase in CPU usage, energy impact is high. After one point the app crashes because of excessive memory usage. What can be going wrong here? Where is the memory leak happening? After 5 mins of running app: the frame dropped to 30, memory has gone up from previous image and energy impact is very high I tried Xcode's instruments which shows Heap allocation is getting significant spike
Possible memory leak while creating mesh
You can use an RAII type instead, or avoid the heap allocation entirely:
#include <memory>
static char cArr[10];                              // no heap allocation at all
static auto cArr2 = std::make_unique<char[]>(10);  // freed automatically when the static is destroyed
I have a little question about C++: how can I destruct this without a memory leak?
void classA::funcA() {
    static char* cArr = new char[10];
}
Or should I just avoid writing it this way?
c++ destruct static variable in function with memory allocation
Looks like the original push-to-deploy feature is now deprecated, but you can use Google Cloud Platform's Build Trigger to do this: navigate to Google Cloud Platform > Container Registry > Build Triggers and set up the branch(es) you want to auto-build from your connected GitHub repository. Make sure you've added a build definition to your repository. Here you can find the full specification, but here's an example of the bare minimum to do a gcloud deploy via cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
I'm learning how to use Google App Engine and I can deploy fine via terminal but I want to allow people to contribute to my github repo and anything they publish will update my app. Here is my repo: https://github.com/rajtastic/roshanissuperveryawesome I've sync'd my repo to app engine and I can see the contents in my Cloud instance My question is: How do I deploy a new version of my app whenever I commit to my repo? Does anyone know if this is possible?
Deploy to Google App Engine via a GitHub Repo
The metric jvm_memory_max_bytes shows: "The maximum amount of memory in bytes that can be used for memory management." So the value does not change according to consumption; it reflects how much memory is available. If you are trying to get how much memory has been used, you need to use the metric jvm_memory_used_bytes. You can find more information on this page, under "3. JVM Metrics".
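As a rough illustration (assuming the default Micrometer label names, where each memory pool carries area and id labels), actual consumption and headroom could be queried with something like:
sum(jvm_memory_used_bytes{area="heap"})
sum(jvm_memory_used_bytes{area="heap"}) / sum(jvm_memory_max_bytes{area="heap"})
sum() adds the series for every pool and pod while avg() averages them, which is why the two queries over jvm_memory_max_bytes give such different numbers without either of them reflecting usage.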
I have micrometer-prometheus jvm metrics monitoring configured for my spring boot application, which is deployed in kubernetes pods. There are 2 pods.When I run queryavg(jvm_memory_max_bytes), I see graph hovering mostly around 400mb value. When I runsum(jvm_memory_max_bytes), graph jumps up to 10gb value.Is this much variation normal?
Prometheus: discrepancy in sum vs avg
It's been solved. I added httpRuntime targetFramework="4.5" under the system.web tag and it worked. I figured it out by looking into Event Viewer: the project was compiled against .NET 4.5.1 and the logged message pointed to that.
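For reference, a minimal sketch of what that web.config change looks like (the version number here matches the answer; adjust it to your project's target framework):
<configuration>
  <system.web>
    <httpRuntime targetFramework="4.5" />
  </system.web>
</configuration>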
I'm trying to use Websocket over HTTPS. Or just even get a Websocket handshake and connection without HTTPS with no success.I'm using Windows 8.1 Pro and installed the Websocket protocol. I've created a web application in IIS with a self signed certificate.When I'm running my website and invoking the connect websocket function I'm getting an 404 error in chrome dev tools under Network Tab-> Websocket.I wrote the fqdn in the link to connect such as ws://fqdc/page.ashx I also tried: ws://localhost/page.ashxCan anyone suggest some idea on what I'm doing wrong?
WebSocket over HTTPS
I'm pretty certain it just means that a task takes two clocks in unit 0 the second time through. The fact that it takes seven clocks in total alludes to this: 1 in unit 0, 1 in unit 1, 1 in unit 2, 1 in unit 3, 2 more in unit 0 and finally 1 in unit 4. It may well just be a contrived example so that there was a conflict when shifting by one clock (the author had to do something to ensure that task 2 would catch up to task 1, and that seems the easiest solution), or unit 0 may well be a non-linear processor of some sort. Another example would have been trying to pump in a task at the point where the previous task was re-entering unit 0. What they're trying to show is that, given a maximum duration within a unit of N cycles in a pipeline, you have to limit your injections of work to one every N cycles to be sure of no conflict.
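Since the question is about writing a program for the pipeline arrangement, here is a rough Python sketch of how one might compute the "forbidden" issue latencies from such a reservation pattern (the unit-per-clock list below is my reading of the figure, so treat it as an assumption):
# Unit occupied by one task at each clock tick; unit 0 is used again
# for two consecutive clocks, as in the figure.
pattern = [0, 1, 2, 3, 0, 0, 4]

def collides(offset, pattern):
    # True if a task issued `offset` clocks later needs a unit at the
    # same tick as the first task.
    busy = {(t, u) for t, u in enumerate(pattern)}
    return any((t + offset, u) in busy for t, u in enumerate(pattern))

forbidden = [d for d in range(1, len(pattern)) if collides(d, pattern)]
print(forbidden)  # [1, 4, 5]: a second task must not be issued 1, 4 or 5 clocks after the first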
In the image below, why does task X appear two times for unit 0, at clock cycles 4 and 5? I have to make a program for the arrangement of the pipeline, and I need to know why the above happens in order to complete it. Is it just because the author wants it to repeat?
Why does task X appear two times for unit 0 at clock cycles 4 and 5?
Could you try this?
#!/bin/sh
FILE=/home/username/public_html/backup_dir/my_db.sql.$(date +"%Y%m%d")
DATABASE=db_name
USER=db_username
PASS=db_password
unalias rm 2> /dev/null
rm ${FILE} 2> /dev/null
rm ${FILE}.gz 2> /dev/null
mysqldump --opt --user=${USER} --password=${PASS} ${DATABASE} > ${FILE}
gzip $FILE
echo "${FILE}.gz was created:"
I have a shell script which is (stored in/home/username/public_html/backup_dir/db_backup.sh) used to take a database backup.When I'm running the shell script in myshared hostingthrough putty by going to the directory/home/username/public_html/backup_dir/and then running the script through commandsh db_backup.shit is creating the zip file inbackup_dirdirectory (good); but the same script when I'm running throught crontab it is createing the zip file in root (i.e./) directory(issue).I want the cron to creating the zip file in/home/username/public_html/backup_dir/where the shell script is.I know I have to set the store path some where, but I don't know where to write it.Shell script (db_backup.sh):#!/bin/sh FILE=my_db.sql.`date +"%Y%m%d"` DATABASE=db_name USER=db_username PASS=db_password unalias rm 2> /dev/null rm ${FILE} 2> /dev/null rm ${FILE}.gz 2> /dev/null mysqldump --opt --user=${USER} --password=${PASS} ${DATABASE} > ${FILE} gzip $FILE echo "${FILE}.gz was created:"crontab command:0 */6 * * * sh /home/username/public_html/backup_dir/db_backup.shAny help/suggestion will help allot.Thanks.
how to set path for cron backup zip file
This is a typical nginx configuration for PHP-FPM.
server {
    root /app/html;

    location / {
        try_files $uri $uri/ /api/index.php$is_args$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
Notice the differences from your example:
- Removed the unnecessary @missing location block.
- Removed the try_files statement from the .php location block.
- Moved the root declaration to the server block. If you need to have different roots, please specify this in your question.
- The only try_files statement includes the full path to your api/index.php.
If a request comes in for a non-existing path, it will be handled by your /app/html/api/index.php script, acting as a global entry point.
In Apache I have an .htaccess that rewrites from http://server/api/any/path/i/want to http://server/api/index.php if no file or folder is found:
Options -MultiViews
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond $0#%{REQUEST_URI} ([^#]*)#(.*)\1$
RewriteRule ^.*$ %2index.php [L,NC,QSA]
</IfModule>
I'm moving to Docker and will use nginx instead, so I want to port the rewrite. Important to note: with Apache and .htaccess, $_SERVER['REQUEST_URI'] is /api/any/path/i/want and not the rewritten URL (index.php...). I'm not that well versed with nginx, but from posts on SO I've figured some things out. Relevant section of site.conf:
location / {
    root /app/html;
    try_files $uri $uri/ index.html /index.php?$args;
}
location ~ \.php$ {
    try_files $uri @missing;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass php:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
}
location @missing {
    rewrite ^ $scheme://$host/api/index.php permanent;
}
Unfortunately the above config only redirects to index.php, and that is as far as I've managed to get. How can I do the same thing in nginx?
Catch-all should pass path to index.php in nginx
When you use Firebase Hosting on top of Cloud Functions for Firebase, Hosting can act as an edge-cached layer on top of the responses from your HTTPS functions. You can read about that integration in the documentation. In particular, read the section managing cache behavior: The main tool you'll use to manage cache is the Cache-Control header. By setting it, you can communicate both to the browser and the CDN how long your content should be cached. In your function, you set Cache-Control like so: res.set('Cache-Control', 'public, max-age=300, s-maxage=600');
Let's say I have a database of 100,000 pieces of content inside of firestore. Each piece of content is unlikely to change more than once per month. My single page app, using firebase hosting, uses a function to retrieve the content from firestore, render it to HTML, and return it to the browser. It's a waste of my firestore quotas and starts to add up to a lot of money if I'm routinely going through this process for content that is not that dynamic. How can that piece of content be saved as a static .com/path/path/contentpage.html file to be served whenever that exact path and query are requested, rather than going through the firestore / functions process every time? My goal is to improve speed, and reduce unnecessary firestore requests, knowing each read costs money. Thanks!
Can Firebase Hosting Serve Cached Data from Cloud Functions?
You can use a before_filter in your controller and then redirect appropriately. For example:
class UserSessionsController < ApplicationController
  before_filter :ensure_proper_subdomain, :only => "new"

  def ensure_proper_subdomain
    if request.host != 'admin.porkjerkyicedcream.com'
      redirect_to params.merge({host: 'admin.porkjerkyicedcream.com'})
    end
  end
end
I have a couple of sub-domains on my rails app, and the main domain too.Lets say I have a login route like this:match "login", :controller => "user_sessions", :action => "new"Now this route can be accessed on all domains and sub-domains, e.g. :porkjerkyicedcream.com/loginand...admin.porkjerkyicedcream.com/loginMy question is how do I force a redirect to remove the subdomain (or add it). So if someone visits/loginonadmin.porkjerkyicedcream.com/loginthey are redirected to the main domain (or vice versa)?Cheers!Edit:I don't necessarily need the solution in the routing.I want to avoid specifying and domain name in the app itself so It can be run it lots of places on lots of different domains (like a different dev domain)
Ruby on Rails routing, how do you force a subdomain / domain changed
Usually when they talk about Etags varying across servers it's in relation to static connect served up by Apache. By default Apache includes the file's inode in the Etag. If the files are not on a shared resource (like a NFS exported NAS), then the file's inode would be different on each server. Typically, the recommendation is to configure Apache like: FileETag MTime Size but even that has the possibility of differences if the modification time varies across the servers. However, for non-static content, you are generating the Etag in your code, so it would be the same across multiple servers.
I gathered from the much famed scaling rails screencasts that at some point when your site gets big and bigger, proxy caching is the way to go. Proxy caching uses etag, among other things and since etags can be more specific and strong validator is perhaps the way to go. However, I also hear that in server farm scenarios the etag is not the right solution because it can vary across servers (How?) This seems contradictory i.e. most likely one is implementing e-tag based proxy caching if they are running a large load balanced server farms. So if e-tag fails in this situation how do they do it? :last_modified isn't really a great option. In a rails app let's say if my etags in a post index action is :etag => "all_posts_#{Post.count}". will this vary from server to server if it's a load balanced server farm?
etags and server farm
I used native SQL: parsed the timestamp string to a SQL timestamp, then found the number of seconds since 1970 using DATEDIFF(datepart, startdate, enddate), and added the number of seconds since year 0. I lose the millisecond part, but I guess this is the next best thing.
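A rough sketch of the DATEDIFF part (table and column names are hypothetical, and the string-to-datetime conversion of the Info text is left out since it depends on the stored format):
SELECT Id,
       DATEDIFF(second, '1970-01-01', InfoAsDateTime) AS SecondsSince1970
FROM   logRecords;
Doing this server-side avoids materializing millions of rows in the C# list that caused the OutOfMemoryException.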
I am currently trying to read a large table from an compact ce database containing aprox 3-4 million rows. The database size i currently 832MB. Populating a list with the records is throwingOutOfMemoryExceptionThe mockup code:using (var con = new DomainContext()) { foreach (var item in con.logRecords) { if (item.Info != null && item.Info != "") item.Timestamp = DateTime.ParseExact(item.Info, "MM.dd.yyyy HH:mm:ss.fff", culture).Ticks; } con.SaveChanges(); }New approach, still not getting it to work....Task.Factory.StartNew(() => { using (var con = new DomainContext()) { for (int i = 0; i < 300; i++) { try { var temp = con.logRecords.Where(p => p.Id <= i * 10000 + 10000 && p.Id >= i * 10000); foreach (var item in temp) { if (item.Info != null && item.Info != "") item.Timestamp = DateTime.ParseExact(item.Info, "MM.dd.yyyy HH:mm:ss.fff", culture).Ticks; } con.SaveChanges(); } catch { } GC.Collect(); Console.WriteLine(i.ToString()); } } });
C# DbDataReader populating List result in OutOfMemoryException
This error is referenced in several scientific Python projects (https://github.com/scikit-learn/scikit-learn/issues/7542, https://github.com/automl/auto-sklearn/issues/101) and is apparently related to multiple installations of NumPy, Cython or different C++ compilers. You should make sure that the environment is clean on both sides: no packages in ~/.local, no setting for the PYTHONPATH environment variable, and only the system Python and system compiler, for instance. Then, also provide the full backtrace instead of the one error.
I have cythonised a chunk of code, which I know works on my usual machine. However, when I transfer it and run it on another machine it is not working. My machine is running Ubuntu and the other machine is running Ubuntu within Docker. The error is: from myFile import myFunction ImportError: /myFile.so: undefined symbol: PyFPE_jbuf The Docker environment is set up with the exact same dependencies as on my local machine, so I can't understand why this is happening!
Importing .so file made with Cython results in ImportError: ... undefined symbol
The docker info command is running inside the java:8-based container, which does not have docker installed or available in it.
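One possible fix (a sketch, not the only option): run the job in an image that ships the docker CLI plus your build tools, and point it at the docker:dind service. The image name below is hypothetical; you would build and push such an image (JDK 8, sbt and the docker client) yourself:
build:
  stage: build
  image: registry.gitlab.com/yourgroup/java8-sbt-docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - sbt server/docker:publish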
I am having difficulties enabling docker for a build job. This is how my gitlab ci config file looks:
image: docker:latest
services:
  - docker:dind

stages:
  - build

build:
  image: java:8
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...
    - sbt server/docker:publish
And here is the output from the job:
gitlab-ci-multi-runner 1.3.2 (0323456)
Using Docker executor with image java:8 ...
Pulling docker image docker:dind ...
Starting service docker:dind ...
Waiting for services to be up and running...
Pulling docker image java:8 ...
Running on runner-30dcea4b-project-1408237-concurrent-0 via runner-30dcea4b-machine-1470340415-c2bbfc45-digital-ocean-4gb...
Cloning repository...
Cloning into '/builds/.../...'...
Checking out 9ba87ff0 as master...
$ docker info
/bin/bash: line 42: docker: command not found
ERROR: Build failed: exit code 1
Any clues why docker is not found?
Enable docker for gitlab ci community edition
First of all, not everybody uses Eclipse (even within the same team), and different IDEs can produce slightly different artifacts (JAR files). Among the advantages of Maven, which does not only build projects:
- it promotes convention over configuration and suggests a "standard" layout for projects;
- it allows gathering all the dependencies needed in your project, using specific versions (as opposed to having something like "lib/log4j.jar" where no version is specified);
- it allows producing lots of different artifacts: JARs containing compiled code or source code, javadoc, WARs, etc.;
- it can be run outside of your IDE, very often on a continuous integration server.
I would recommend using Maven even for small projects :)
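To make the dependency point concrete, a minimal pom.xml sketch (the coordinates are just an example) pins an exact library version instead of shipping an unversioned lib/log4j.jar:
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0.0</version>
  <dependencies>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
  </dependencies>
</project>
Building is then just "mvn package" from the command line, which is exactly what a continuous integration server would run.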
This sounds like a dumb question but I am really confused by this. I have never used Maven, but I know that it is used to build projects. So, my question is: why is there a need to build a project using Maven when we can build the project in Eclipse (without Maven), just by exporting the Eclipse project as a JAR and including the required libraries? Suppose I download a project from GitHub. I can import that project in Eclipse, export it as a JAR and use its functionality. So why does everyone suggest using Maven to build the project and generate the binaries?
difference between building project using maven and eclipse export as Jar [closed]
Just add this to your crontab:
* * * * * for i in {0..59}; do curl http://your.domain.zone/page.html && sleep 1; done;
The for loop is added because cron cannot run anything more often than once per minute.
I have a script which I am running from the browser with a meta refresh, and it works without any issue in the browser, but it will not work from cron. What can I do to run it every second from cron? I know I can do it with sleep, but then I have to create several crontab entries and run the script with sleep each time. How can I run this script every 5 seconds?
<meta http-equiv="refresh" content="5;url=test.php">
<?php
$res = mysql_query("SELECT * FROM tableA where st='0' order by id asc LIMIT 1");
$row = mysql_fetch_array($res);
$link = $row['wl'];
function getTitle($Url){
    $str = file_get_contents($Url);
    if(strlen($str)>0){
        preg_match("/\<\/td\><\/tr\><tr\><td colspan\=2\>(.*)\<\/td\>/",$str,$title);
        return $title[1];
    }
}
getTitle($link);
?>
running a script from cron every second
Edit: it works after 48 hours with the prefix signed_. AWS just takes a bit more time to apply it, is all.
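For reference, a lifecycle rule along these lines (a sketch; the ID and day count are placeholders) matches only keys that start with signed_ at the top of the bucket. Keys like info/signed_... are not touched because the prefix match is anchored at the start of the key:
{
  "Rules": [
    {
      "ID": "expire-old-signed-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "signed_" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 }
    }
  ]
}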
I want to delete the previous versions of files in an S3 bucket that are not in a folder but uploaded directly into the bucket, and only those with a specific prefix. E.g. some S3 keys are: signed_2020_04_15.pdf, signed_2020_04_17.pdf, unsigned_2020_04_15.pdf, unsigned_2020_04_17.pdf, info/signed_2020_04_16.pdf, info/unsigned_2020_04_16.pdf. I want my lifecycle rule to delete only the previous versions of the files starting with signed_ but not the ones in the folder info. That means in the above list only signed_2020_04_15.pdf and signed_2020_04_17.pdf must be affected. How do I set my prefix? I tried the prefix signed_ and waited for the lifecycle policy to run, but it doesn't work. In another bucket the prefix was like folder/ and it works. So, do lifecycle policies work only for files that are in a folder and not for ones uploaded directly?
AWS S3 Lifecycle Expiration Prefix Rule
There are multiple ways to achieve this, depending on your needs.
Method 1: use _find by making a POST request to /db/_find and select the fields you want:
curl -X POST -d '{"selector": {}, "fields": ["name", "family", "dob", "phone", "address", "SID"]}' http://IP:5984/mydb/_find
The -d parameter is used to send the data with the POST request. You may need to escape the quotes if you're running Windows.
Method 2: use a view function.
Method 3: process the results with a simple node program:
const http = require("http");
http.get({
    host: 'IP',
    port: 5984,
    path: '/mydb/158'
}, function(response) {
    var body = '';
    response.on('data', function(d) {
        body += d;
    });
    response.on('end', function() {
        var parsed = JSON.parse(body);
        var result = {};
        for (var key in parsed) {
            if (key != "_id" && key != "_rev") {
                result[key] = parsed[key];
            }
        }
        console.log(result);
    });
});
The above code issues a GET request to your CouchDB server, parses the JSON output and puts the results in a new object, ignoring the _id and _rev keys.
Method 4: process the output as a string. As you correctly pointed out, this is not a good solution. It's ugly, but it doesn't mean it can't be done. You could even pipe the output through sed/awk/perl and process the string there.
To begin with, I am new to couchdb and new to databases in general.I have a couchdb instance setup in a docker container. I have another docker container in the same box that has some nodeJS code that is talking to this couchdb instance. Now, I am doing basic stuff like adding an entry and getting an entry from the db. To get an entry, this is what I do: curl -X GET http://IP:5984/mydb/158 I get an output as follows: {"_id":"156", "_rev":"1-47c19c00bee6417f36c7db2bc8607468", "name":{"given":["Itisha"], "family":["Iyengar"]}, "dob":"1981-06-01", "phone":{"value":"tel:312-116-1123"}, "address":{"line["147leverettmailcenter"], "city":"Naperville", "state":"IL", "postalCode":"02770"}, "SID":""} I pass the data to another function that processes it further. However, I only want the actual data and don't want fields like _id and _rev.How do I do that? - I read somewhere that I can log into the couchdb instance by doing http://localhost:5984/ from the machine where it is installed. Here I can edit the get script to make it return just the data and ignore the _id and _rev fields. However, I am running it from a docker container on Ubuntu. I do not have access to a UI through which I can make such changes. Is there an alternate way to do this? If not, is there a way to parse the output and filter out the _id and _rev fields? As of now, I am doing this in a crude way by doing String.splice() and filtering out the data (after the _id and _rev fields) till the end. But I don't think this is a good way to do this and definitely not a good idea for actual production code. Please suggest. Thanks in advance.
couchdb in docker container: how to remove id from output
If you know the username you can go to https://gist.github.com/username/ and then search through them, but that only works if it's not an anonymously posted or private gist. There's no nice way to get to a gist if you don't have the link and don't know who posted it. In your case, the gist is available (at the moment, as the first one) under https://gist.github.com/dhh.
Is there a way to find a Gist from the name (or description)?I was watching a YouTube video discussion and one of the participants brought up a Gist. It was too small to read on the video, but the name at the top was clear (dhh/test_induced_design_damage.rb); however, I wasn't able to use that name to find the Gist. (Eventually I found a raw link on a Twitter feed, with a 20-digit hex number. The Gist is public.) I later tried several different searches to see if there was a way I could find it by name, and I tried looking in Github's Help, but I couldn't find a way. Did I miss something, or is there just no way to do this?
how to search for gist by name
Git is an awesome tool, here are some tips that should help you. This will list all branches that exist. Any that are prefixed with origin/ are on the server and you will need to fetch them. git branch -a Run the following to get a remote branch git checkout BRANCH_NAME git pull origin BRANCH_NAME Checkout is what allows you to swap between branches. You can even checkout commits and enter detached head mode, but that is a more complex topic. When you are done with the work in one branch, you should merge your code back into your master branch or dev branch or whatever you happen to use. Once you have pulled down a branch and have checked it out, your local git repo will contain all the files for that branch. If you checkout another branch, the code will be replaced by the code of the other branch ect...
I'm new to Github. I just cloned a repository, and I believe that when I clone it, all of its branches are copied as well. All I want to do is switch from one branch to another, in the order of the branches. I mainly just want to be able to run my code at each branch, and then switch to the next branch soon after. Basically what I'm asking is, how do I open all of the files associated with each branch so that I can run the code?
Github switch from one branch to another after cloning repository
In the case of docker volumes, you don't have control over where docker saves its volumes; all you can do is change the docker root directory. So it's better to mount your new partition under a directory and then change the docker root directory to this mount point. This way you can achieve what you want, but you should consider that all of your docker data will then be stored in this new partition. To change your docker root directory, first create a file named daemon.json at:
/etc/docker/daemon.json
and add the config below to it:
{
    "data-root": "/path/to/new/directory"
}
Then restart the docker daemon:
systemctl restart docker
Then you can run the command below to check the current docker root directory:
docker info
I have a Docker container running on my PC. The main functionality of the container is to scrape data, and this accumulates 0.3 GB/day. I'll only be needing this data for the last 30 days, and after that I plan to store it archived on hard disk drives for historical purposes. However, after a few hours of trial and error, I've failed to create a Docker volume on another partition: the _data folder always appears in the /var/lib/docker/volumes/ folder, while the partition drive stays empty. I also tried creating the volume with docker run -v, but it still creates the volume in the main volumes folder. The operating system is Pop!_OS 20.04 LTS. (Details of the partition were attached as screenshots.)
Docker Named Volume on another Partition on another hard drive
In order for the browser to resolve this custom name, you will need to add an alias to your /etc/hosts file. It probably already contains a line for 127.0.0.1, in which case you just add your alias to the list:
127.0.0.1    localhost localhost.localdomain myappname
You can then change the server name in the app's config to make it explicitly use this name:
app.config['SERVER_NAME'] = 'myappname:5000'
Only privileged programs (run as root or with sudo) can bind to low ports such as 80, so you will still have to use a high port number.
Currently, my Flask app runs locally at http://localhost:5000/some_page. How could I create a local custom location for my app, like http://myappname/some_page (sort of like a local domain name)? Is this possible at all? Any pointers would be great.
Creating a local custom host name instead of localhost?
From this issue on GitHub. Ok, thanks for the config. That all looks fine so my guess is you're getting a process crash due to a bad extension. Since you're in production, I'd suggest uncommenting the workers line and using at least 2 workers. That will at least shield you from the crashes a little because the other worker will be able to handle traffic while the crashed one is automatically restarted.
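In the Capistrano puma settings shown in the question, that would mean changing the workers line to something like the sketch below (two is just a starting point; workers are often sized to the number of CPU cores):
set :puma_workers, 2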
I have a puma server running a ruby on rails app on an AWS EC2 instance. It was working fine for a while, but I found it responding with 502 errors a few hours later. The app is deployed with capistrano. A simple restart of puma fixed the problem temporarily, but I want to prevent it happening again. Not quite sure what to try first. Here's my capistrano puma config: set :puma_rackup, -> { File.join(current_path, 'config.ru') } set :puma_state, "#{shared_path}/tmp/pids/puma.state" set :puma_pid, "#{shared_path}/tmp/pids/puma.pid" set :puma_bind, "unix://#{shared_path}/tmp/sockets/puma.sock" set :puma_conf, "#{shared_path}/puma.rb" set :puma_access_log, "#{shared_path}/log/puma.error.log" set :puma_error_log, "#{shared_path}/log/puma.access.log" set :puma_role, :app set :puma_env, fetch(:rack_env, fetch(:rails_env, 'production')) set :puma_threads, [0, 8] set :puma_workers, 0 set :puma_worker_timeout, nil set :puma_init_active_record, true set :puma_preload_app, false set :bundle_gemfile, -> { release_path.join('Gemfile') } Puma error log doesn't show any crashes. Nginx error log shows (xx'd out client ip): 2016/08/09 06:25:52 [error] 1081#0: *348 connect() to unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: example.com, request: "POST /mypath HTTP/1.1", upstream: "http://unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock:/mypath", host: "example.com"
Puma silent crash with nginx reverse proxy
The Dockerfile reference has the answer: https://docs.docker.com/engine/reference/builder/. More specifically, the HEALTHCHECK directive: https://docs.docker.com/engine/reference/builder/#healthcheck. Essentially, when your container's entrypoint fails, the container dies: https://docs.docker.com/engine/reference/builder/#entrypoint. But in any case, a process running inside a container is also visible in the host's process list, so you can safely use the output of ps aux | grep httpd to monitor your apache's PIDs.
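A minimal sketch of what such a health check could look like for an Apache container (this assumes curl is present in the image; if not, install it or use another probe):
FROM httpd:2.4
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
docker ps then shows the container as healthy or unhealthy, and docker inspect --format '{{.State.Health.Status}}' <container> can be polled from a monitoring script to raise alerts.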
I am new to Docker container and my question is how to monitor a process that is running inside a container. For Example, I have a container running apache in it. how would I know if apache process inside container got killed but my container is still running. How we will ensure specific process inside the container is running,if that process goes down how we will get alert ?
How to monitor a process that is running inside a container
Analyze your certificate: https://www.ssllabs.com/ssltest/analyze.html?d=rainbowchilli.co.uk&latest
- This server is vulnerable to the POODLE attack.
- This server accepts the RC4 cipher, which is weak.
- The server does not support Forward Secrecy with the reference browsers.
- This server's certificate chain is incomplete.
The most likely reason for the error is that the certificate authority that issued your SSL certificate is trusted on your desktop, but not on your mobile; note "This server's certificate chain is incomplete" and see https://superuser.com/questions/347588/how-do-ssl-chains-work. This is how you can get the list of trusted certificates in Android: "Android: List of available trusted root certificates".
If I load my site on a desktop PC all is fine, I believe, and I get SSL working as it should: https://www.rainbowchilli.co.uk. But if I browse to it with Chrome on a Galaxy S4 phone or a Nexus 7 tablet I get SSL errors. Why would this be, and how do I fix it, please?
Prestashop 1.6 SSL error on Mobiles/Tablets using Chrome
You cannot create a pull request if you don't have anything new in your code. Commit and push code on your fork, then you'll be able to create a pull request. If you want to catch up with the 107 missing commits, do the following (as explained here):
git pull https://github.com/hexojs/hexo master
I want to create a PR request on the following repository HERE. Now i had forked the main repo as you can see HERE , as of now its 107 commits behind, there are no code changes in my forked version that i've made , my question how do i go about creating a PR request ? Do i need to rebase ? do i need to pull from remote on my local system ? How do i go about creating a PR request step by step ?
How to create a PR on GitHub when the forked repo hasn't been updated?
You can use a single Helm chart to manage all the deployments and config maps. Create a tpl (template) for the deployment and service, so this single template is used to generate the multiple deployment YAML configs. You will get the three deployment YAMLs as output while managing a single template file. For the configmaps you can follow the same approach and keep them in the single Helm chart, if that works for you. For different environments you can manage the different values in values.yaml files, like dev-values.yaml and prod-values.yaml:
helm install -f values.prod.p1.yaml helm-app
I plan to use Helm for deployment. I have three applications/pods p1, p2, p3, each of which has two environments, dev and prod, and in each environment there is a configmap.yml and a deployment.yml. I plan on using Helm; however, how should I structure this? Do I need three Helm charts, one per application, or is it possible to pack everything into one Helm chart given these constraints? I thought of the following structure:
+-- charts
|   \-- my-chart
|       +-- Chart.yaml          # Helm chart metadata
|       +-- templates
|       |   \-- p1
|       |       +-- configmap1.yml
|       |       +-- dep1.yaml
|       |       ... similarly for p2, p3
|       +-- values.yaml         # default values
|       +-- values.dev.p1.yaml  # development override values
|       +-- values.dev.p2.yaml
|       +-- values.dev.p3.yaml
|       +-- values.prod.p1.yaml # production override values
|       +-- values.prod.p2.yaml
|       +-- values.prod.p3.yaml
Now if I want to deploy p1 in prod, I simply run helm install -f values.prod.p1.yaml helm-app. Would this work? Is this the general convention?
How to structure Helm chart with different environments?
It appears that you want one row per distinct device_id + time combination. You can do this with some clever use of CASE:
select device_id,
       SUM(case when measure_name = 'ntu' then measure_value end) as ntu,
       SUM(case when measure_name = 'shutterspeed' then measure_value end) as shutterspeed,
       SUM(case when measure_name = 'intensity' then measure_value end) as intensity,
       time
from table
group by device_id, time
I would like to know how I can swap the row and column data with a SQL query. (The original data and the desired output were shown as screenshots.)
SQL query swap column and row
Sorted, I had to reduce the complexity of the password used for my private key (for reasons only god and Microsoft know)... Uploaded the PFX and no issues...
I followed these instructions to upload a server cert issued by Thawte:http://msdn.microsoft.com/en-us/library/windowsazure/gg465712.aspxSo, I've got a PFX file and the cert complies with the requirements, that is:- Contain a private key (well it's a PFX...). - Purpose is Server Authentication. - Subject name match the domain name that is used to access the service. - Key size of 2048-bits.For some reason when I upload it, it returns an error stating: "Can't upload certificate. Please try again. If the problem persists, contact support".NB:- I can import that pfx to one of my local Windows machine with no problems. - I've generated the CSR using certreq (can't see any problem with that) - I included all certificates in the certificate path when I exported the PFXIf anyone can advise on how to resolve this issue it would be much appreciated.Thanks in advance.
Azure upload/install SSL certificate issue - "Can't upload certificate"
The simple solution is to backup the entire JENKINS_HOME folder. In case you need it for disaster recovery, just copy the whole thing back in. There is a number of files inside the JENKINS_HOME folder that are important to backup, such as the jobs folder which holds configuration of all the jobs, as well as a number of files that don't require backing up. If you want to go into detail, the official docs can give you the specifics of what needs backing up.
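One rough way to script this between the two EC2 instances (paths and host name are placeholders; JENKINS_HOME is assumed to be /var/lib/jenkins):
tar czf /tmp/jenkins-backup-$(date +%F).tar.gz -C /var/lib/jenkins jobs plugins users config.xml
rsync -az /tmp/jenkins-backup-$(date +%F).tar.gz backup-host:/backups/jenkins/
Run it from cron on the production server; restoring means extracting the archive back into JENKINS_HOME on whichever server takes over and restarting Jenkins.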
I'm planning to practice jenkins. I want to set up 2 jenkins server on ec2 instance and one is production server another is backup. I want the jobs in the production server automatically backed up to the backup server. In case of an disaster I want to restore it to the production server. Can anyone help me with how can I implement this in realtime I plan to connect them with ssh and run a script in backup that takes the jobs from production server using build triggers.
backup and restore to jenkins in case of disaster
UserPoolIdentityProvider was added in Oct 2019 (see the official docs). Your CloudFormation would then look something like:
CognitoUserPoolIdentityProvider:
  Type: AWS::Cognito::UserPoolIdentityProvider
  Properties:
    ProviderName: Google
    AttributeMapping:
      email: emailAddress
    ProviderDetails:
      client_id: <yourclientid>.apps.googleusercontent.com
      client_secret: <yourclientsecret>
      authorize_scopes: email openid
    ProviderType: Google
    UserPoolId:
      Ref: CognitoUserPool
I want to setup a cognito user pool and configure my google identity provider automatically with a cloudformation yml file. I checked all the documentation but could not find anything even close to doing this. Any idea on how to do it?
Can I setup AWS Cognito User Pool Identity Providers with Cloudformation?
This website and this website contain information on the same problem. In order to keep your tables up to date, you must commit your transactions; use db.commit() to do this. As mentioned in the other answer, you can remove the need for this by enabling auto-commit, which can be done by running db.autocommit(True). Auto-commit is enabled in the interactive shell, which explains why you didn't see the problem there.
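A rough sketch of how the polling loop might look with an explicit commit per iteration (connection details and the key-tracking logic are placeholders):
import time
import MySQLdb

db = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = db.cursor()
max_key = 0
while True:
    cur.execute("SELECT * FROM A WHERE A.key > %s", (max_key,))
    for row in cur.fetchall():
        # ... process the row and update max_key ...
        pass
    db.commit()      # end the transaction so the next SELECT sees newly inserted rows
    time.sleep(30)
Alternatively, call db.autocommit(True) once right after connecting and drop the explicit commit.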
My Python program queries a set of tables in a MySQL DB, sleeps for 30 seconds, then queries them again, etc. The tables in question are continuously updated by a third-party, and (obviously) I would like to see the new results every 30 seconds. Let's say my query looks like this: "select * from A where A.key > %d" % maxValueOfKeyFromLastQuery Regularly I will see that my program stops finding new results after one or two iterations, even though new rows are present in the tables. I know new rows are present in the tables because I can see them when I issue the identical query from interactive mysql (i.e. not from Python). I found that the problem goes away in Python if I terminate my connection to the database after each query and then establish a new one for the next query. I thought maybe this could be a server-side caching issue as discussed here: Explicit disable MySQL query cache in some parts of program However: When I check the interactive mysql shell, it says that caching is on. (So if this is a caching problem, how come the interactive shell doesn't suffer from it?) If I explicitly execute SET SESSION query_cache_type = OFF from within my Python program, the problem still occurs. Creating a new DB connection for each query is the only way I've been able to make the problem go away. How can I get my queries from Python to see the new results that I know are there?
Chronic stale results using MySQLdb in Python
Those lines signal merge conflicts in Git. When you do a merge, Git is generally good at automatically working out how to merge files together; however, there are some cases where it cannot. For example, when both branches add to the same area of the same file, you get a merge conflict. In these cases, those lines are drawn around the boundary of the conflict. The section above the ======= belongs to the HEAD ref (or whatever is displayed after <<<<<<<). The section below belongs to the master ref (or whatever is displayed after >>>>>>>). It's up to you to delete these lines and make the corresponding edit to the code. If you only want to keep what is on the HEAD ref in the final version of the code (post-merge), then you delete everything below the ======= line, and vice versa if you only want to take what is on the master branch. Of course, you can also take both versions of the code by just removing the markers. You can see the git manual for more information.
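A small made-up example of what a conflicted region looks like and one possible resolution (the file contents here are purely illustrative):
<<<<<<< HEAD
greeting = "hello from my branch"
=======
greeting = "hello from master"
>>>>>>> master
Keeping only your side means deleting the three marker lines and the master version, leaving just:
greeting = "hello from my branch"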
I have been fumbling around on github, and now with some help I have managed to make my branch the local master. however, I get these lines that i guess are tracking where things have been changed. But I don't want them!! I really just want my current files to become the new master as they are. What exactly are these lines for? And how do I suppress them? <<<<<<< HEAD ======= >>>>>>> master
HEAD and master denotations using github
You may be doing "git add .", since you mentioned that all files are getting committed. Try doing "git add <file>" for just the file you want to stage and commit (or "git add path/to/project/" to stage everything under the project directory). As for the merge conflict, it occurs when you and the person you are working with make changes on the same line of the same file, so Git cannot know which change to accept, yours or your partner's.
Does anyone have this problem after cloning and making changes on the local, when trying to add and commit, it commits all of the files in the downloads instead of just the files in the project where I cloned the github repo to? I also experienced a lot of issue when I edit code on my local and my partner for the project pushed their code onto github but I didn't pull it, which leads to having merging problems. Is there a easy way to get around that to just not pull it and push my code to github? I've tried merge but often times it got confusing. As to the first question, I never came up with a solution except for adding the files one by one but even that leads to issue sometimes when they can't find the files in the project.
Issue with github: github adds all of my previously uncommitted files to this repo when I do git add
One good way to debug the rewrites is to specify RewriteLog and RewriteLogLevel. You can set the log level up to 9, which logs quite a lot about the rewrites. Remember to disable the logging after debugging, because it is quite heavy for the Apache process. See the RewriteLog documentation.
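For reference, a minimal sketch of those directives (Apache 2.2 syntax; the path is a placeholder, and on Apache 2.4 they were replaced by something like "LogLevel alert rewrite:trace6"):
RewriteLog "/var/log/apache2/rewrite.log"
RewriteLogLevel 9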
So I setted up the following lines in order to redirect some requests to my static domain:RewriteEngine On RewriteBase / RewriteRule ^img/(.*)$ http://static.mydomain.com/img/$1 [R=301] RewriteRule ^css/(.*)$ http://static.mydomain.com/css/$1 [R=301] RewriteRule ^js/(.*)$ http://static.mydomain.com/js/$1 [R=301,L]But for some reason, when I link to a, let's say, a picture:<img src="img/icons/hello.png">It's showing 404 when it really exists on static server (which actually means it's not being redirected).What am I doing wrong? I spent like two hours trying everything I know but no fix found.Thank you very much in advance. Here is my full htaccess file:<IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^img/(.*)$ http://static.mydomain.com/img/$1 [R=301] RewriteRule ^css/(.*)$ http://static.mydomain.com/css/$1 [R=301] RewriteRule ^js/(.*)$ http://static.mydomain.com/js/$1 [R=301,L] RewriteCond %{REQUEST_URI} ^system.* RewriteRule ^(.*)$ /index.php?/$1 [L] RewriteCond %{REQUEST_URI} ^application.* RewriteRule ^(.*)$ /index.php?/$1 [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] </IfModule> <IfModule !mod_rewrite.c> ErrorDocument 404 /index.php </IfModule>
Htaccess for static content not working!
Swarm doesn't give the option to listen on a specific interface; it defaults to listening on all interfaces. This is an open issue. Modifying overlay networks inside of Docker will not change this behavior.
When I launch an app via Docker I can publish the app on a port specifying the IP. Suppose that my server has two IPs (private 192.168.0.2 and public 200.168.0.2); I can expose an app on the private IP with this command: docker run -it -p 192.168.0.2:80:80 nginx How can I achieve something similar with Docker Swarm? I guess I must create a docker network layer first, but I don't understand what the right syntax is. Basically I would like to do something similar to: docker network create \ --driver overlay \ --IP 192.168.0.2 \ --IP 192.167.0.1 \ private_net docker service create --replicas 2 \ --network private_net --name my-web nginx Where 192.168.0.2 and 192.167.0.1 are the IPs of the swarm cluster servers.
Docker Swarm and private IP
You cannot change the status code or headers through a Lambda Authorizer, but you can do it through Gateway Responses: in the API Gateway console, select the API you want to change, select Gateway Responses in the left panel, select the response you want to override (e.g. UNAUTHORIZED), click Edit in the right panel, and you can then redefine the response status, headers and body as you like. Full documentation: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-gateway-response-using-the-console.html
I have a question about API Gateway authorizers and lambda functions. My scenario is the following: I have a resource in AWS API Gateway for which the authorization is enabled. The authorizer calls a lambda function which, if the user is not authorized, redirects the user to another URL. So basically I would like to customize the authorizer to return a 302 rather than a 401/403/500 status code. Do you know if that is possible? I know that having a lambda function in the integration phase of the gateway lets me customize the response. What about this particular scenario? Thanks.
API Gateway lambda authorizer custom status code
/var/run/docker will be created when you start the docker service: systemd: sudo systemctl start docker upstart: sudo service docker start init.d: sudo /etc/init.d/docker start You might also need this if you get this error: FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
When I initially installed Docker, it showed as version 1.0.1. Given that the current version is 1.4.1, I found and executed the following instructions: $ sudo apt-get update $ sudo apt-get install docker.io $ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9 $ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main \ > /etc/apt/sources.list.d/docker.list" $ sudo apt-get update $ sudo apt-get install lxc-docker Now, when I run docker version I get 1.4.1, but Docker no longer works - it gives me this error: root@8dedd2fff58e:/# docker version Client version: 1.4.1 Client API version: 1.16 Go version (client): go1.3.3 Git commit (client): 5bc2ff8 OS/Arch (client): linux/amd64 FATA[0000] Get http:///var/run/docker.sock/v1.16/version: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS? What can I do to fix this, but retain the most current Docker version 1.4.1?
dial unix /var/run/docker.sock: no such file or directory after upgrading to lxc-docker
From the Linux command prompt issue the command: /usr/lib/jvm/jre/bin/keytool -import -alias <alias> -file <certfile> -keystore cacerts That command uses the Java keystore tool to import the new cert file into the existing cacerts file. The <alias> is whatever you want to call the cert. The <certfile> is the actual file you want imported. If you are prompted for a password, the default keystore password is 'changeit'. Repeat for each new cert file you want added.
Recently our server got upgraded to a SHA-256 based SSL certificate, and since then we are facing javax.naming.CommunicationException. In order to resolve this issue I need to add/append a set of certificate chains into the cacerts file under the path /usr/lib/jvm/jre/lib/security on our server. I found this link on SO which explains the steps to achieve this through a program. Can anyone suggest how to add these certificate chains into the cacerts file through Linux commands?
How to integrate SSL certificates into the cacerts file in the /jre/security folder?
You can pass parameters to the SonarScanner for MSBuild either on the command line or in the SonarQube.Analysis.xml XML settings file, as described in the docs.
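As a hedged sketch of both options — the project key, server URL, and token below are placeholders, and the XML namespace shown is the one used by the SonarQube.Analysis.xml file shipped with recent scanner versions, so check the copy in your own scanner folder:

    # command line: pass properties to the begin step with /k: and /d:
    dotnet SonarScanner.MSBuild.dll begin /k:"my-project" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="my-token"
    dotnet build
    dotnet SonarScanner.MSBuild.dll end /d:sonar.login="my-token"

    <!-- SonarQube.Analysis.xml, placed next to the scanner binaries -->
    <SonarQubeAnalysisProperties xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
      <Property Name="sonar.host.url">http://localhost:9000</Property>
      <Property Name="sonar.login">my-token</Property>
    </SonarQubeAnalysisProperties>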
I have a simple C# Hello World project on which I am trying to do code quality analysis with SonarQube. Running dotnet C:\git\itergo\sonar\sonar-scanner-msbuild-4.7.1.2311-netcoreapp2.0\SonarScanner.MSBuild.dll end throws an error: sonar-project.properties files are not understood by the SonarScanner for MSBuild. Remove those files from the following folders: C:\git\sonar_test\samples\core\console-apps\HelloMsBuild 09:24:17.089 Post-processing failed. Exit code: 1
SonarQube sonar-project.properties file not allowed
No, you cannot do that using YAML. The only inheritance-like feature in YAML is the Merge Key Language-Independent Type, and that only works within one YAML document, not between multiple documents in the same YAML file (separated by ---) and certainly not between different YAML files. However, docker-compose reads docker-compose.yml and, if available, docker-compose.override.yml, where the values in the second file override the ones in the first. Combined with the -f option to specify an input YAML file for docker-compose, you can use a shared base file with different overrides. This is a feature of docker-compose and is done on the data loaded from the YAML files, not by combining the YAML files and then loading them.
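A small sketch of the override mechanism described above — the service name, image, and setting are made up for illustration:

    # docker-compose.yml (shared base)
    version: "3"
    services:
      web:
        image: myapp:latest
        environment:
          LOG_LEVEL: info

    # docker-compose.prod.yml (override for one environment)
    version: "3"
    services:
      web:
        environment:
          LOG_LEVEL: warn

    # merge base + override explicitly with -f
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

Without any -f flags, docker-compose automatically merges docker-compose.yml with docker-compose.override.yml if the latter exists.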
I have a lot of services which use the same basic configuration in docker-compose. Actually most of the configuration is the same, with some minor tweaks. I have seen that it is somehow possible to inherit values in YAML. Can I use this in docker-compose to define a "default service" and use it all over in the other services, e.g. for docker-compose run? How would I do this?
use inheritance in docker-compose.yml
I assume you can. Download the manifest - https://github.com/aws/amazon-vpc-cni-k8s/blob/master/config/v1.5/aws-k8s-cni.yaml - and run kubectl apply -f aws-k8s-cni.yaml Did you check https://github.com/aws/amazon-vpc-cni-k8s?
I am planning to set up a Kubernetes cluster in AWS without using EKS. Since EKS provides the Amazon VPC CNI for managing networking at the pod level, which provides better networking, I am planning to use the same. I need to know whether it is possible to set up my Kubernetes cluster with the Amazon VPC CNI, and if yes, can somebody provide me the documentation or how to perform it?
Can we use AWS VPC CNI on a Kubernetes cluster in AWS when not using EKS?
1) access to the logs from previously deployed container which can be stopped/destroyed. Rsyslog/syslog? One option is to send your logs to an ELK stack, using logstash-forwarder, log-courier or Beaver. 2) how easy to rollback a deployment? Is that safe in terms of dropped requests? Can you send USR2 + QUIT signals to the image but keep starting new master/workers with another image gracefully? Nginx upstream with multiple image ports? This is a good question. If you want to terminate sessions gracefully you would have to talk to the container running unicorn (or whatever your app server is) and issue the USR2 + QUIT signals inside the container. Containers are pretty lightweight, so instead of restarting your nginx/unicorn you could just instantiate new containers with the new code and terminate the nginx/unicorn process before terminating the container with the old code. The trick here is the mechanism to manage containers and issue commands inside them; Kubernetes may have a mechanism for this. 3) how to provision Dockerfile with Ansible or alternatives? Otherwise what are pitfalls of Dockerfile bash-like style? This depends on how you want to do it. You can template Dockerfiles and have Ansible run docker build, or you can use something like the Ansible Docker module. A Dockerfile is in essence a runbook to build containers and can be modified, put into source control, etc. 4) what is the best way to access to Rails console through Docker? Assign a pseudo-tty to your container and make it interactive. Then you can run docker attach <container-id> to attach to the container and just run your bundle exec rails console command. Alternatively, you can make your container's process number 1 the sshd process; you can then ssh to the container and run bundle exec rails console. This is how tools like test-kitchen with the docker-driver do it.
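As a rough sketch of the "new container, then gracefully stop the old one" approach for point 2 — the container names, ports, and image tag are invented for illustration, and the exact signals depend on how unicorn/nginx run inside the container:

    # start a container from the new image next to the old one
    docker run -d --name app_new -p 8081:8080 myapp:v2
    # repoint the nginx upstream from 8080 to 8081 and reload nginx here
    # then ask the old container's main process (PID 1) to finish in-flight requests and exit
    docker kill --signal=QUIT app_old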
I've read many resources but am still confused about Docker from a deployment point of view. I'm trying to find out the best practices for a Rails app within a Docker environment, and I'm particularly interested in how to solve the following problems: 1) access to the logs from a previously deployed container which can be stopped/destroyed. Rsyslog/syslog? 2) how easy is it to roll back a deployment? Is that safe in terms of dropped requests? Can you send USR2+QUIT signals to the image but keep starting new master/workers with another image gracefully? Nginx upstream with multiple image ports? 3) how to provision a Dockerfile with Ansible or alternatives? Otherwise, what are the pitfalls of the Dockerfile bash-like style? 4) what is the best way to access the Rails console through Docker?
Deploying a Rails app with Docker
Yes, they're very different. The first is really a single array; the second is actually var+1 arrays, potentially scattered all over your RAM. var arrays hold the data, and one holds pointers to the var data arrays.
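A small self-contained sketch that makes the difference concrete, including how the cleanup differs (the element count is arbitrary):

    #include <cstddef>

    int main() {
        const std::size_t var = 5;

        // One contiguous allocation: var rows of 2 ints, laid out back to back.
        int (*p4)[2] = new int[var][2];

        // var + 1 allocations: one array of pointers plus var separate row arrays,
        // which may end up anywhere on the heap.
        int** p5 = new int*[var];
        for (std::size_t i = 0; i < var; ++i)
            p5[i] = new int[2];

        // Cleanup mirrors the allocation pattern.
        delete[] p4;                          // single delete[] frees the whole block
        for (std::size_t i = 0; i < var; ++i)
            delete[] p5[i];                   // free each row first
        delete[] p5;                          // then the pointer array
        return 0;
    }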
What is the difference between the two array definitions below? Are they laid out differently in memory? int var = 5; int (*p4)[2] = new int [var][2]; // first 2d array int** p5 = new int*[var]; // second 2d array for(int i = 0; i < var; ++i){ p5[i] = new int[2]; }
Difference between two methods of creating a 2D array
The message indicates that the browser didn't accept the certificate. This can happen when you got the wrong domain, the certificate expired (not your case), or the browser doesn't recognize the certificate authority the certificate came from (most likely that's the case here). When creating a self-signed SSL certificate, the browser (Firefox) can also return "Invalid Certification Authority", and Chrome can say "Not secure". This is fine, since one uses a self-signed certificate.
I want to test my website on a Mac with localhost. I followed this step to generate localhost.crt and localhost.key. Then I double-clicked localhost.crt to add it to System in Keychains and set it to Always Trust. Then I launched the website in Chrome. I still see "Your connection to this site is not secure" and "Certificate (Invalid)". Does anyone know how to fix this?
Self-signed root certificate is still not valid
First you could try the following expression: count(LATENCY{status="CRITICAL"}) > 0 If it doesn't work as expected, then try the following one: count(LATENCY{status="CRITICAL"} or vector(0)) > 1
I have a metric, LATENCY, and a label, status. I want to fire an alert when LATENCY has status=CRITICAL, i.e. LATENCY{status="CRITICAL"}. The LATENCY status will be critical only if latency is beyond a threshold. How can I check if there is at least one time series with LATENCY{status="CRITICAL"}? I used expr: absent(LATENCY{status="CRITICAL"}) == 0, but it doesn't work.
Prometheus: How to check if there is at least one time series for a given metric and label combination?
The error occurs because the disk_size_gb argument must be inside the node_config block, as follows: node_config { disk_size_gb = 200 } The Terraform documentation for google_container_cluster shows that this argument needs to be under that block.
I am trying to add the boot disk size to the node auto-provisioned Kubernetes cluster as follows: resource "google_container_cluster" "gc-dev-kube-ds0" { . . . cluster_autoscaling { enabled = true resource_limits { resource_type = "cpu" minimum = 4 maximum = 150 } resource_limits { resource_type = "memory" minimum = 4 maximum = 600 } resource_limits { resource_type = "nvidia-tesla-v100" minimum = 0 maximum = 4 } } disk_size_gb = 200 } but I am getting the following error: Error: Unsupported argument on kubernetes.tf line 65, in resource "google_container_cluster" "gc-dev-kube-ds0": 65: disk_size_gb = 200 An argument named "disk_size_gb" is not expected here. I also checked the Terraform documentation but nothing is mentioned on this.
Setting boot disk size for autoscaling kubernetes cluster through Terraform
Assuming you have access to the httpd.conf file or your virtual host configuration, you can add Directory sections with wildcards, or DirectoryMatch sections, to accomplish this in your httpd.conf file. You're probably looking for something akin to: <Directory /home/snippets/*> Options -Indexes </Directory> Make sure you read up on how the various configuration settings are merged.
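If only some of the subfolders should lose their listings, a DirectoryMatch section with a regex is one way to express that — the paths below assume the a, b and c folders live directly under /home/snippets, so adjust the pattern to the real layout:

    <DirectoryMatch "^/home/snippets/(a|b|c)">
        Options -Indexes
    </DirectoryMatch>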
I've a folder structure like this: home contains snippets, other and bla; snippets contains a, b and c; other contains d, e and f; bla contains g, h and i; each of those lowest-level folders contains files. I've default files (index.html) in most folders, but for the folders without default files, I used "Options -Indexes" in the home .htaccess to generate a 403 error. In the snippets folder, I use a custom directory listing. <IfModule mod_autoindex.c> Options +Indexes IndexOptions IgnoreCase VersionSort SuppressHTMLPreamble ReadmeName ../index_end.html HeaderName ../index_begin.html </IfModule> Inside the snippets folder, Options +Indexes applies. However, I want to prevent directory listing in folders a, b, c. Is there a solution which doesn't need an .htaccess file for every folder? I only have access to .htaccess.
Directory listing for one folder htaccess
I ran the below command to get the same. I hope it solves the issue quite easily. alias util='kubectl get nodes --no-headers | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\''' [mynode ~]# util my-control-01 Resource Requests Limits cpu 3683m (46%) 6848m (85%) memory 5188Mi (21%) 8370Mi (35%) my-edge-01 Resource Requests Limits cpu 4 (100%) 16 (400%) memory 1Gi (13%) 4Gi (53%) my-edge-02 Resource Requests Limits cpu 4 (100%) 16 (400%) memory 1Gi (13%) 4Gi (53%) my-worker-01 Resource Requests Limits cpu 7810m (97%) 27750m (346%) memory 11066329600 (66%) 20814538Ki (128%) my-worker-02 Resource Requests Limits cpu 6051m (75%) 13160m (164%) memory 12554Mi (79%) 17728Mi (112%)
I am using different open-source components by pulling Helm charts and installing them. Now, after some time, while deploying one custom Helm chart I am getting "resources unavailable". So, rather than counting manually, is there any way to know the total reserved resources? In other words, how do I get the total CPU and memory reserved by Kubernetes deployments/daemonsets/statefulsets (where limits and requests for CPU and memory are the same)?
Getting CPU and Memory hard limit count in Kubernetes
After doing some research I ended up with the following solution using the class io.vertx.core.http.HttpServerRequest: private X509Certificate extractCertificate(HttpServerRequest req) throws SSLPeerUnverifiedException { X509Certificate[] certs = req.connection().peerCertificateChain(); if (null != certs && certs.length > 0) { return certs[0]; } throw new RuntimeException("No X.509 client certificate found in request"); }
Following the Quarkus Getting Started guide and enabling SSL, the next step I wanted to take was to get the client certificate chain. I would like to do something like this: private X509Certificate extractCertificate(HttpServletRequest req) { X509Certificate[] certs = (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate"); if (null != certs && certs.length > 0) { return certs[0]; } throw new RuntimeException("No X.509 client certificate found in request"); } Following the getting started guide, injecting HttpServletRequest is not straightforward, as described in this issue. What would be the way to get access to the client certificate chain then?
How to retrieve the client certificate from a request in a REST service using Quarkus in two way SSL
Either remove the network from docker-compose.yml (so the default is used) or add db-network to the server service: services: server: build: ./Test ports: - "8000:80" depends_on: - db networks: - db-network
I have been trying to connect my web API built with ASP.NET Core 7.1 to a PostgreSQL database. It is inside a Docker container. However, every time I run docker-compose -f docker-compose.yml up, I get the following error: Unhandled exception. System.Net.Sockets.SocketException (00000001, 11): Resource temporarily unavailable I assume this means that something has gone wrong with the database connection but I don't know how to fix it. Here is my docker-compose.yml: version: '3.8' services: server: build: ./Test ports: - "8000:80" depends_on: - db db: container_name: db image: postgres:latest environment: - POSTGRES_USER=user - POSTGRES_PASSWORD=pass - POSTGRES_DB=data volumes: - pgdata:/var/lib/postgresql/data ports: - "1234:5432" networks: - db-network networks: db-network: driver: bridge volumes: pgdata: And here is my appsettings.json from the backend: { "Logging": { "LogLevel": { "Default": "Information", "Microsoft.AspNetCore": "Warning" } }, "ConnectionStrings": { "Data": "Host=db;Port=5432;Database=data;User ID=user;Password=pass" }, "AllowedHosts": "*" } I have tried changing the connection strings, password and user ID but I keep getting the same error.
How to connect an ASP.NET backend to a postgres database inside a docker container
There are two ways to handle this. Call out to CLI utilities: this requires that you supply the contents of the krb5-workstation package and its dependency, libkadm5, in your deployment package or via a Layer. Launch an EC2 instance from the Lambda execution environment's AMI, update all packages (sudo yum update), install the MIT Kerberos utilities (sudo yum install krb5-workstation), make the Layer skeleton (mkdir bin lib), populate the binaries (rpm -ql krb5-workstation | grep bin | xargs -I %% cp -a %% bin), populate their libraries (rpm -ql libkadm5 | xargs -I %% cp -a %% lib), prepare the Layer (zip -r9 krb5-workstation-layer.zip bin lib), create the Layer and reference it from your Lambda function, then invoke (e.g.) /opt/bin/kinit from inside your function. Do it natively: it turns out that if your code calls gss_acquire_cred, which most code does, usually through bindings and an abstraction layer, you don't need the CLI utilities. Supply a client keytab file to your function, either by bundling it with the deployment package or (probably better) fetching it from S3 + KMS, and set the KRB5_CLIENT_KTNAME environment variable to the location of the keytab file. Requested addendum: in either case, if you find you need to specify additional Kerberos configuration, see the krb5.conf docs for details. If /etc is off the table, then "Multiple colon-separated filenames may be specified in [the] KRB5_CONFIG [environment variable]; all files which are present will be read."
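For the CLI route with a Node.js function, the call to kinit could look roughly like the sketch below — the keytab location, principal, and krb5.conf path are hypothetical and depend on how you package or fetch them:

    // somewhere early in the handler, before talking to Kafka
    const { execFileSync } = require('child_process');

    // hypothetical paths: keytab and krb5.conf bundled in the deployment package
    process.env.KRB5_CONFIG = '/var/task/krb5.conf';
    execFileSync('/opt/bin/kinit', ['-kt', '/var/task/client.keytab', 'svc-kafka@EXAMPLE.COM']);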
I have an AWS Lambda function (Node.js) right now that writes some data to a test Kafka cluster. The one that's in production uses Kerberos for auth, so I was wondering if there was a way to set up my Lambda function to authenticate with Kerberos. I wasn't able to find much online regarding this...
Kerberos authentication in a Lambda function
The best way to make a simple problem like this memory safe is to add a destructor to Node and delete the root node at the end of the program. Because the nodes are allocated with new and never freed, you currently have a memory leak at the end. Here's roughly what the definition should look like: ~Node() { // call delete on every pointer in the struct delete next; delete left; delete right; } Then at the end of your program you can call delete root and the destructor will be called, recursively deleting every node below it. Even if you use shared_ptr or unique_ptr for root instead of calling delete yourself, you still need the destructor, otherwise all your child nodes will remain allocated when root is deleted.
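If you would rather not write the destructor at all, the same ownership can be expressed with unique_ptr members instead of raw pointers — this is a variation on the answer above, not its original approach, and very deep structures can still overflow the stack during the recursive destruction:

    #include <memory>

    struct Node {
        std::unique_ptr<Node> next;   // owns the next node
        std::unique_ptr<Node> left;   // owns the left child
        std::unique_ptr<Node> right;  // owns the right child
        int data;
        explicit Node(int val) : data(val) {}  // smart-pointer members default to null
    };

    int main() {
        std::unique_ptr<Node> root(new Node(4));   // std::make_unique would need C++14
        root->left.reset(new Node(2));
        root->right.reset(new Node(7));
        // when root goes out of scope, every node it owns is freed automatically
    }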
I solve algorithm questions from sites like LeetCode, HackerRank, or Cracking the Coding Interview. I do most of the questions in C++. So for most of them I have a Node struct as below struct Node { Node* next; //for tree Node* left; Node* right; int data; //ctor Node(int val) : left(nullptr);..... }; Then I have a function (or functions) which implements the algorithm bool someAlgorithm(Node* root) { //do stuff } and finally I create the nodes in main int main() { auto root = new Node(4); root->left = new .. root->left->left = new .. } I want to incorporate memory management in this kind of solution. If I use C++11 shared_ptr, do I need to provide a destructor? If yes, what should I write in the destructor? But I found that shared_ptr makes the code overly complex and hard to understand for such small programs. In general, what is the best way to make solving such questions memory safe?
Memory management for linked list and tree programs in C++
At the end of the day, there isn't a big difference between bind mounts and Docker named volumes. I tend to prefer keeping persistent data from Docker services in Docker volumes. You can then use tools like docker system df -v to inspect what your application uses. As for exporting the data, you can use docker cp: docker cp someContainer:/somedir/ .
Option 1: (named volume. The volume is identified by its name. It stores its data in /var/lib/docker/volumes/nameofthevolume) # create the volume in advance $ docker volume create test_vol Option 2: (here the name of the volume bind-test does not matter; what matters is which local path /home/user/test it mounts to, which is persistent. Rather than /var/lib/docker/volume/somevolumename, /home/user/somedatafolder is more readable. Cons: we have to ensure that /home/user/somedatafolder exists.) # inside a docker-compose file ... volumes: bind-test: driver: local driver_opts: type: none o: bind device: /home/user/test or: version: '3' services: myservice: volumes: - ./path:/volume/path The downside of bind mounts is that they place files that are managed by containers, with the uid/gid from the container, inside a path likely used by other users on the host, often with a different uid/gid on the host. The result is permission issues either on the host or inside the container. You need to align uid/gids between the two to avoid this.
PostgreSQL persistent data: which is better, a named volume or a bind mount?